00:00:00.000 Started by upstream project "autotest-per-patch" build number 126183 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.038 The recommended git tool is: git 00:00:00.039 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.058 Fetching changes from the remote Git repository 00:00:00.061 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.082 Using shallow fetch with depth 1 00:00:00.082 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.082 > git --version # timeout=10 00:00:00.117 > git --version # 'git version 2.39.2' 00:00:00.117 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.146 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.146 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.695 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.707 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.719 Checking out Revision 1e4055c0ee28da4fa0007a72f92a6499a45bf65d (FETCH_HEAD) 00:00:03.719 > git config core.sparsecheckout # timeout=10 00:00:03.731 > git read-tree -mu HEAD # timeout=10 00:00:03.749 > git checkout -f 1e4055c0ee28da4fa0007a72f92a6499a45bf65d # timeout=5 00:00:03.771 Commit message: "packer: Drop centos7" 00:00:03.771 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.889 [Pipeline] Start of Pipeline 00:00:03.901 [Pipeline] library 00:00:03.903 Loading library shm_lib@master 00:00:03.903 Library shm_lib@master is cached. Copying from home. 00:00:03.932 [Pipeline] node 00:00:03.943 Running on WFP43 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:03.945 [Pipeline] { 00:00:03.957 [Pipeline] catchError 00:00:03.959 [Pipeline] { 00:00:03.975 [Pipeline] wrap 00:00:03.988 [Pipeline] { 00:00:03.999 [Pipeline] stage 00:00:04.002 [Pipeline] { (Prologue) 00:00:04.193 [Pipeline] sh 00:00:04.523 + logger -p user.info -t JENKINS-CI 00:00:04.543 [Pipeline] echo 00:00:04.544 Node: WFP43 00:00:04.551 [Pipeline] sh 00:00:04.843 [Pipeline] setCustomBuildProperty 00:00:04.855 [Pipeline] echo 00:00:04.856 Cleanup processes 00:00:04.861 [Pipeline] sh 00:00:05.142 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.142 2239706 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.155 [Pipeline] sh 00:00:05.439 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.439 ++ grep -v 'sudo pgrep' 00:00:05.439 ++ awk '{print $1}' 00:00:05.439 + sudo kill -9 00:00:05.439 + true 00:00:05.452 [Pipeline] cleanWs 00:00:05.461 [WS-CLEANUP] Deleting project workspace... 00:00:05.461 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.468 [WS-CLEANUP] done 00:00:05.471 [Pipeline] setCustomBuildProperty 00:00:05.483 [Pipeline] sh 00:00:05.762 + sudo git config --global --replace-all safe.directory '*' 00:00:05.842 [Pipeline] httpRequest 00:00:05.864 [Pipeline] echo 00:00:05.866 Sorcerer 10.211.164.101 is alive 00:00:05.875 [Pipeline] httpRequest 00:00:05.880 HttpMethod: GET 00:00:05.880 URL: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:05.881 Sending request to url: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:05.899 Response Code: HTTP/1.1 200 OK 00:00:05.899 Success: Status code 200 is in the accepted range: 200,404 00:00:05.900 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:08.252 [Pipeline] sh 00:00:08.535 + tar --no-same-owner -xf jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:08.557 [Pipeline] httpRequest 00:00:08.582 [Pipeline] echo 00:00:08.584 Sorcerer 10.211.164.101 is alive 00:00:08.595 [Pipeline] httpRequest 00:00:08.600 HttpMethod: GET 00:00:08.601 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:08.602 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:08.604 Response Code: HTTP/1.1 200 OK 00:00:08.604 Success: Status code 200 is in the accepted range: 200,404 00:00:08.605 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:24.681 [Pipeline] sh 00:00:24.966 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:27.593 [Pipeline] sh 00:00:27.873 + git -C spdk log --oneline -n5 00:00:27.873 2728651ee accel: adjust task per ch define name 00:00:27.873 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:27.873 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:00:27.873 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:00:27.873 719d03c6a sock/uring: only register net impl if supported 00:00:27.885 [Pipeline] } 00:00:27.900 [Pipeline] // stage 00:00:27.909 [Pipeline] stage 00:00:27.911 [Pipeline] { (Prepare) 00:00:27.928 [Pipeline] writeFile 00:00:27.943 [Pipeline] sh 00:00:28.222 + logger -p user.info -t JENKINS-CI 00:00:28.236 [Pipeline] sh 00:00:28.519 + logger -p user.info -t JENKINS-CI 00:00:28.532 [Pipeline] sh 00:00:28.813 + cat autorun-spdk.conf 00:00:28.814 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:28.814 SPDK_TEST_NVMF=1 00:00:28.814 SPDK_TEST_NVME_CLI=1 00:00:28.814 SPDK_TEST_NVMF_NICS=mlx5 00:00:28.814 SPDK_RUN_UBSAN=1 00:00:28.814 NET_TYPE=phy 00:00:28.820 RUN_NIGHTLY=0 00:00:28.827 [Pipeline] readFile 00:00:28.857 [Pipeline] withEnv 00:00:28.859 [Pipeline] { 00:00:28.875 [Pipeline] sh 00:00:29.160 + set -ex 00:00:29.160 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:00:29.160 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:29.160 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.160 ++ SPDK_TEST_NVMF=1 00:00:29.160 ++ SPDK_TEST_NVME_CLI=1 00:00:29.160 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:29.160 ++ SPDK_RUN_UBSAN=1 00:00:29.160 ++ NET_TYPE=phy 00:00:29.160 ++ RUN_NIGHTLY=0 00:00:29.160 + case $SPDK_TEST_NVMF_NICS in 00:00:29.160 + DRIVERS=mlx5_ib 00:00:29.160 + [[ -n mlx5_ib ]] 00:00:29.160 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:29.160 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:35.733 rmmod: ERROR: Module irdma is 
not currently loaded 00:00:35.734 rmmod: ERROR: Module i40iw is not currently loaded 00:00:35.734 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:35.734 + true 00:00:35.734 + for D in $DRIVERS 00:00:35.734 + sudo modprobe mlx5_ib 00:00:35.734 + exit 0 00:00:35.743 [Pipeline] } 00:00:35.763 [Pipeline] // withEnv 00:00:35.769 [Pipeline] } 00:00:35.788 [Pipeline] // stage 00:00:35.802 [Pipeline] catchError 00:00:35.805 [Pipeline] { 00:00:35.824 [Pipeline] timeout 00:00:35.824 Timeout set to expire in 1 hr 0 min 00:00:35.827 [Pipeline] { 00:00:35.846 [Pipeline] stage 00:00:35.849 [Pipeline] { (Tests) 00:00:35.869 [Pipeline] sh 00:00:36.155 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:00:36.155 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:00:36.155 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:00:36.155 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:00:36.155 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:36.155 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:00:36.155 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:00:36.155 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:00:36.155 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:00:36.155 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:00:36.155 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:00:36.155 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:00:36.155 + source /etc/os-release 00:00:36.155 ++ NAME='Fedora Linux' 00:00:36.155 ++ VERSION='38 (Cloud Edition)' 00:00:36.155 ++ ID=fedora 00:00:36.155 ++ VERSION_ID=38 00:00:36.155 ++ VERSION_CODENAME= 00:00:36.155 ++ PLATFORM_ID=platform:f38 00:00:36.155 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:36.155 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:36.155 ++ LOGO=fedora-logo-icon 00:00:36.155 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:36.155 ++ HOME_URL=https://fedoraproject.org/ 00:00:36.155 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:36.155 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:36.155 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:36.155 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:36.155 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:36.155 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:36.155 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:36.155 ++ SUPPORT_END=2024-05-14 00:00:36.155 ++ VARIANT='Cloud Edition' 00:00:36.155 ++ VARIANT_ID=cloud 00:00:36.155 + uname -a 00:00:36.155 Linux spdk-wfp-43 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:36.156 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:00:39.443 Hugepages 00:00:39.443 node hugesize free / total 00:00:39.443 node0 1048576kB 0 / 0 00:00:39.443 node0 2048kB 0 / 0 00:00:39.443 node1 1048576kB 0 / 0 00:00:39.443 node1 2048kB 0 / 0 00:00:39.443 00:00:39.443 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:39.443 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:39.443 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:39.443 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:39.443 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:39.443 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:39.443 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:39.443 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:39.443 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:39.443 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 
00:00:39.443 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:39.443 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:39.443 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:39.443 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:39.443 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:39.443 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:39.443 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:39.443 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:39.443 + rm -f /tmp/spdk-ld-path 00:00:39.443 + source autorun-spdk.conf 00:00:39.444 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.444 ++ SPDK_TEST_NVMF=1 00:00:39.444 ++ SPDK_TEST_NVME_CLI=1 00:00:39.444 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:39.444 ++ SPDK_RUN_UBSAN=1 00:00:39.444 ++ NET_TYPE=phy 00:00:39.444 ++ RUN_NIGHTLY=0 00:00:39.444 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:39.444 + [[ -n '' ]] 00:00:39.444 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:39.444 + for M in /var/spdk/build-*-manifest.txt 00:00:39.444 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:39.444 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:00:39.444 + for M in /var/spdk/build-*-manifest.txt 00:00:39.444 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:39.444 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:00:39.444 ++ uname 00:00:39.444 + [[ Linux == \L\i\n\u\x ]] 00:00:39.444 + sudo dmesg -T 00:00:39.444 + sudo dmesg --clear 00:00:39.444 + dmesg_pid=2240600 00:00:39.444 + [[ Fedora Linux == FreeBSD ]] 00:00:39.444 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:39.444 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:39.444 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:39.444 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:39.444 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:39.444 + [[ -x /usr/src/fio-static/fio ]] 00:00:39.444 + export FIO_BIN=/usr/src/fio-static/fio 00:00:39.444 + FIO_BIN=/usr/src/fio-static/fio 00:00:39.444 + sudo dmesg -Tw 00:00:39.444 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:39.444 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:39.444 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:39.444 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:39.444 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:39.444 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:39.444 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:39.444 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:39.444 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:39.444 Test configuration: 00:00:39.444 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.444 SPDK_TEST_NVMF=1 00:00:39.444 SPDK_TEST_NVME_CLI=1 00:00:39.444 SPDK_TEST_NVMF_NICS=mlx5 00:00:39.444 SPDK_RUN_UBSAN=1 00:00:39.444 NET_TYPE=phy 00:00:39.444 RUN_NIGHTLY=0 13:31:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:00:39.444 13:31:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:39.444 13:31:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:39.444 13:31:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:39.444 13:31:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.444 13:31:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.444 13:31:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.444 13:31:05 -- paths/export.sh@5 -- $ export PATH 00:00:39.444 13:31:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.444 13:31:05 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:00:39.444 13:31:05 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:39.444 13:31:05 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721043065.XXXXXX 00:00:39.444 13:31:05 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721043065.szYFvx 00:00:39.444 13:31:05 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:39.444 13:31:05 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:39.444 13:31:05 -- common/autobuild_common.sh@453 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:00:39.444 13:31:05 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:39.444 13:31:05 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:39.444 13:31:05 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:39.444 13:31:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:39.444 13:31:05 -- common/autotest_common.sh@10 -- $ set +x 00:00:39.444 13:31:05 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:00:39.444 13:31:05 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:39.444 13:31:05 -- pm/common@17 -- $ local monitor 00:00:39.444 13:31:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.444 13:31:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.444 13:31:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.444 13:31:05 -- pm/common@21 -- $ date +%s 00:00:39.444 13:31:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.444 13:31:05 -- pm/common@21 -- $ date +%s 00:00:39.444 13:31:05 -- pm/common@25 -- $ sleep 1 00:00:39.444 13:31:05 -- pm/common@21 -- $ date +%s 00:00:39.444 13:31:05 -- pm/common@21 -- $ date +%s 00:00:39.444 13:31:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043065 00:00:39.444 13:31:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043065 00:00:39.444 13:31:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043065 00:00:39.444 13:31:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043065 00:00:39.444 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043065_collect-vmstat.pm.log 00:00:39.444 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043065_collect-cpu-load.pm.log 00:00:39.444 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043065_collect-cpu-temp.pm.log 00:00:39.444 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043065_collect-bmc-pm.bmc.pm.log 00:00:40.379 13:31:06 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:40.379 13:31:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:40.379 13:31:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:40.379 13:31:06 -- spdk/autobuild.sh@13 -- $ cd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:40.380 13:31:06 -- spdk/autobuild.sh@16 -- $ date -u 00:00:40.380 Mon Jul 15 11:31:06 AM UTC 2024 00:00:40.380 13:31:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:40.380 v24.09-pre-206-g2728651ee 00:00:40.380 13:31:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:40.380 13:31:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:40.380 13:31:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:40.380 13:31:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:40.380 13:31:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:40.380 13:31:06 -- common/autotest_common.sh@10 -- $ set +x 00:00:40.380 ************************************ 00:00:40.380 START TEST ubsan 00:00:40.380 ************************************ 00:00:40.380 13:31:06 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:40.380 using ubsan 00:00:40.380 00:00:40.380 real 0m0.001s 00:00:40.380 user 0m0.001s 00:00:40.380 sys 0m0.000s 00:00:40.380 13:31:06 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:40.380 13:31:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:40.380 ************************************ 00:00:40.380 END TEST ubsan 00:00:40.380 ************************************ 00:00:40.640 13:31:06 -- common/autotest_common.sh@1142 -- $ return 0 00:00:40.640 13:31:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:40.640 13:31:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:40.640 13:31:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:40.640 13:31:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:40.640 13:31:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:40.640 13:31:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:40.640 13:31:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:40.640 13:31:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:40.640 13:31:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:40.640 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:00:40.640 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:00:40.900 Using 'verbs' RDMA provider 00:00:56.732 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:09.008 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:09.267 Creating mk/config.mk...done. 00:01:09.267 Creating mk/cc.flags.mk...done. 00:01:09.267 Type 'make' to build. 00:01:09.267 13:31:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:01:09.267 13:31:35 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:09.267 13:31:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:09.267 13:31:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:09.267 ************************************ 00:01:09.267 START TEST make 00:01:09.267 ************************************ 00:01:09.267 13:31:35 make -- common/autotest_common.sh@1123 -- $ make -j72 00:01:09.835 make[1]: Nothing to be done for 'all'. 
00:01:19.826 The Meson build system 00:01:19.826 Version: 1.3.1 00:01:19.826 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:19.826 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:19.826 Build type: native build 00:01:19.826 Program cat found: YES (/usr/bin/cat) 00:01:19.826 Project name: DPDK 00:01:19.826 Project version: 24.03.0 00:01:19.826 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:19.826 C linker for the host machine: cc ld.bfd 2.39-16 00:01:19.826 Host machine cpu family: x86_64 00:01:19.826 Host machine cpu: x86_64 00:01:19.827 Message: ## Building in Developer Mode ## 00:01:19.827 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:19.827 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:19.827 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:19.827 Program python3 found: YES (/usr/bin/python3) 00:01:19.827 Program cat found: YES (/usr/bin/cat) 00:01:19.827 Compiler for C supports arguments -march=native: YES 00:01:19.827 Checking for size of "void *" : 8 00:01:19.827 Checking for size of "void *" : 8 (cached) 00:01:19.827 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:19.827 Library m found: YES 00:01:19.827 Library numa found: YES 00:01:19.827 Has header "numaif.h" : YES 00:01:19.827 Library fdt found: NO 00:01:19.827 Library execinfo found: NO 00:01:19.827 Has header "execinfo.h" : YES 00:01:19.827 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:19.827 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:19.827 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:19.827 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:19.827 Run-time dependency openssl found: YES 3.0.9 00:01:19.827 Run-time dependency libpcap found: YES 1.10.4 00:01:19.827 Has header "pcap.h" with dependency libpcap: YES 00:01:19.827 Compiler for C supports arguments -Wcast-qual: YES 00:01:19.827 Compiler for C supports arguments -Wdeprecated: YES 00:01:19.827 Compiler for C supports arguments -Wformat: YES 00:01:19.827 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:19.827 Compiler for C supports arguments -Wformat-security: NO 00:01:19.827 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:19.827 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:19.827 Compiler for C supports arguments -Wnested-externs: YES 00:01:19.827 Compiler for C supports arguments -Wold-style-definition: YES 00:01:19.827 Compiler for C supports arguments -Wpointer-arith: YES 00:01:19.827 Compiler for C supports arguments -Wsign-compare: YES 00:01:19.827 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:19.827 Compiler for C supports arguments -Wundef: YES 00:01:19.827 Compiler for C supports arguments -Wwrite-strings: YES 00:01:19.827 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:19.827 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:19.827 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:19.827 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:19.827 Program objdump found: YES (/usr/bin/objdump) 00:01:19.827 Compiler for C supports arguments -mavx512f: YES 00:01:19.827 Checking if "AVX512 checking" compiles: YES 00:01:19.827 Fetching 
value of define "__SSE4_2__" : 1 00:01:19.827 Fetching value of define "__AES__" : 1 00:01:19.827 Fetching value of define "__AVX__" : 1 00:01:19.827 Fetching value of define "__AVX2__" : 1 00:01:19.827 Fetching value of define "__AVX512BW__" : 1 00:01:19.827 Fetching value of define "__AVX512CD__" : 1 00:01:19.827 Fetching value of define "__AVX512DQ__" : 1 00:01:19.827 Fetching value of define "__AVX512F__" : 1 00:01:19.827 Fetching value of define "__AVX512VL__" : 1 00:01:19.827 Fetching value of define "__PCLMUL__" : 1 00:01:19.827 Fetching value of define "__RDRND__" : 1 00:01:19.827 Fetching value of define "__RDSEED__" : 1 00:01:19.827 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:19.827 Fetching value of define "__znver1__" : (undefined) 00:01:19.827 Fetching value of define "__znver2__" : (undefined) 00:01:19.827 Fetching value of define "__znver3__" : (undefined) 00:01:19.827 Fetching value of define "__znver4__" : (undefined) 00:01:19.827 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:19.827 Message: lib/log: Defining dependency "log" 00:01:19.827 Message: lib/kvargs: Defining dependency "kvargs" 00:01:19.827 Message: lib/telemetry: Defining dependency "telemetry" 00:01:19.827 Checking for function "getentropy" : NO 00:01:19.827 Message: lib/eal: Defining dependency "eal" 00:01:19.827 Message: lib/ring: Defining dependency "ring" 00:01:19.827 Message: lib/rcu: Defining dependency "rcu" 00:01:19.827 Message: lib/mempool: Defining dependency "mempool" 00:01:19.827 Message: lib/mbuf: Defining dependency "mbuf" 00:01:19.827 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:19.827 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:19.827 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:19.827 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:19.827 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:19.827 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:19.827 Compiler for C supports arguments -mpclmul: YES 00:01:19.827 Compiler for C supports arguments -maes: YES 00:01:19.827 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:19.827 Compiler for C supports arguments -mavx512bw: YES 00:01:19.827 Compiler for C supports arguments -mavx512dq: YES 00:01:19.827 Compiler for C supports arguments -mavx512vl: YES 00:01:19.827 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:19.827 Compiler for C supports arguments -mavx2: YES 00:01:19.827 Compiler for C supports arguments -mavx: YES 00:01:19.827 Message: lib/net: Defining dependency "net" 00:01:19.827 Message: lib/meter: Defining dependency "meter" 00:01:19.827 Message: lib/ethdev: Defining dependency "ethdev" 00:01:19.827 Message: lib/pci: Defining dependency "pci" 00:01:19.827 Message: lib/cmdline: Defining dependency "cmdline" 00:01:19.827 Message: lib/hash: Defining dependency "hash" 00:01:19.827 Message: lib/timer: Defining dependency "timer" 00:01:19.827 Message: lib/compressdev: Defining dependency "compressdev" 00:01:19.827 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:19.827 Message: lib/dmadev: Defining dependency "dmadev" 00:01:19.827 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:19.827 Message: lib/power: Defining dependency "power" 00:01:19.827 Message: lib/reorder: Defining dependency "reorder" 00:01:19.827 Message: lib/security: Defining dependency "security" 00:01:19.827 Has header "linux/userfaultfd.h" : YES 00:01:19.827 Has header "linux/vduse.h" : YES 00:01:19.827 Message: 
lib/vhost: Defining dependency "vhost" 00:01:19.827 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:19.827 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:19.827 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:19.827 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:19.827 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:19.827 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:19.827 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:19.827 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:19.827 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:19.827 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:19.827 Program doxygen found: YES (/usr/bin/doxygen) 00:01:19.827 Configuring doxy-api-html.conf using configuration 00:01:19.827 Configuring doxy-api-man.conf using configuration 00:01:19.827 Program mandb found: YES (/usr/bin/mandb) 00:01:19.827 Program sphinx-build found: NO 00:01:19.827 Configuring rte_build_config.h using configuration 00:01:19.827 Message: 00:01:19.827 ================= 00:01:19.827 Applications Enabled 00:01:19.827 ================= 00:01:19.827 00:01:19.827 apps: 00:01:19.827 00:01:19.827 00:01:19.827 Message: 00:01:19.827 ================= 00:01:19.827 Libraries Enabled 00:01:19.827 ================= 00:01:19.827 00:01:19.827 libs: 00:01:19.827 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:19.827 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:19.827 cryptodev, dmadev, power, reorder, security, vhost, 00:01:19.827 00:01:19.827 Message: 00:01:19.827 =============== 00:01:19.827 Drivers Enabled 00:01:19.827 =============== 00:01:19.827 00:01:19.827 common: 00:01:19.827 00:01:19.827 bus: 00:01:19.827 pci, vdev, 00:01:19.827 mempool: 00:01:19.827 ring, 00:01:19.827 dma: 00:01:19.827 00:01:19.827 net: 00:01:19.827 00:01:19.827 crypto: 00:01:19.827 00:01:19.827 compress: 00:01:19.827 00:01:19.827 vdpa: 00:01:19.827 00:01:19.827 00:01:19.827 Message: 00:01:19.827 ================= 00:01:19.827 Content Skipped 00:01:19.827 ================= 00:01:19.827 00:01:19.827 apps: 00:01:19.827 dumpcap: explicitly disabled via build config 00:01:19.827 graph: explicitly disabled via build config 00:01:19.827 pdump: explicitly disabled via build config 00:01:19.827 proc-info: explicitly disabled via build config 00:01:19.827 test-acl: explicitly disabled via build config 00:01:19.827 test-bbdev: explicitly disabled via build config 00:01:19.827 test-cmdline: explicitly disabled via build config 00:01:19.827 test-compress-perf: explicitly disabled via build config 00:01:19.827 test-crypto-perf: explicitly disabled via build config 00:01:19.827 test-dma-perf: explicitly disabled via build config 00:01:19.827 test-eventdev: explicitly disabled via build config 00:01:19.828 test-fib: explicitly disabled via build config 00:01:19.828 test-flow-perf: explicitly disabled via build config 00:01:19.828 test-gpudev: explicitly disabled via build config 00:01:19.828 test-mldev: explicitly disabled via build config 00:01:19.828 test-pipeline: explicitly disabled via build config 00:01:19.828 test-pmd: explicitly disabled via build config 00:01:19.828 test-regex: explicitly disabled via build config 00:01:19.828 test-sad: explicitly disabled via build config 00:01:19.828 test-security-perf: explicitly disabled via 
build config 00:01:19.828 00:01:19.828 libs: 00:01:19.828 argparse: explicitly disabled via build config 00:01:19.828 metrics: explicitly disabled via build config 00:01:19.828 acl: explicitly disabled via build config 00:01:19.828 bbdev: explicitly disabled via build config 00:01:19.828 bitratestats: explicitly disabled via build config 00:01:19.828 bpf: explicitly disabled via build config 00:01:19.828 cfgfile: explicitly disabled via build config 00:01:19.828 distributor: explicitly disabled via build config 00:01:19.828 efd: explicitly disabled via build config 00:01:19.828 eventdev: explicitly disabled via build config 00:01:19.828 dispatcher: explicitly disabled via build config 00:01:19.828 gpudev: explicitly disabled via build config 00:01:19.828 gro: explicitly disabled via build config 00:01:19.828 gso: explicitly disabled via build config 00:01:19.828 ip_frag: explicitly disabled via build config 00:01:19.828 jobstats: explicitly disabled via build config 00:01:19.828 latencystats: explicitly disabled via build config 00:01:19.828 lpm: explicitly disabled via build config 00:01:19.828 member: explicitly disabled via build config 00:01:19.828 pcapng: explicitly disabled via build config 00:01:19.828 rawdev: explicitly disabled via build config 00:01:19.828 regexdev: explicitly disabled via build config 00:01:19.828 mldev: explicitly disabled via build config 00:01:19.828 rib: explicitly disabled via build config 00:01:19.828 sched: explicitly disabled via build config 00:01:19.828 stack: explicitly disabled via build config 00:01:19.828 ipsec: explicitly disabled via build config 00:01:19.828 pdcp: explicitly disabled via build config 00:01:19.828 fib: explicitly disabled via build config 00:01:19.828 port: explicitly disabled via build config 00:01:19.828 pdump: explicitly disabled via build config 00:01:19.828 table: explicitly disabled via build config 00:01:19.828 pipeline: explicitly disabled via build config 00:01:19.828 graph: explicitly disabled via build config 00:01:19.828 node: explicitly disabled via build config 00:01:19.828 00:01:19.828 drivers: 00:01:19.828 common/cpt: not in enabled drivers build config 00:01:19.828 common/dpaax: not in enabled drivers build config 00:01:19.828 common/iavf: not in enabled drivers build config 00:01:19.828 common/idpf: not in enabled drivers build config 00:01:19.828 common/ionic: not in enabled drivers build config 00:01:19.828 common/mvep: not in enabled drivers build config 00:01:19.828 common/octeontx: not in enabled drivers build config 00:01:19.828 bus/auxiliary: not in enabled drivers build config 00:01:19.828 bus/cdx: not in enabled drivers build config 00:01:19.828 bus/dpaa: not in enabled drivers build config 00:01:19.828 bus/fslmc: not in enabled drivers build config 00:01:19.828 bus/ifpga: not in enabled drivers build config 00:01:19.828 bus/platform: not in enabled drivers build config 00:01:19.828 bus/uacce: not in enabled drivers build config 00:01:19.828 bus/vmbus: not in enabled drivers build config 00:01:19.828 common/cnxk: not in enabled drivers build config 00:01:19.828 common/mlx5: not in enabled drivers build config 00:01:19.828 common/nfp: not in enabled drivers build config 00:01:19.828 common/nitrox: not in enabled drivers build config 00:01:19.828 common/qat: not in enabled drivers build config 00:01:19.828 common/sfc_efx: not in enabled drivers build config 00:01:19.828 mempool/bucket: not in enabled drivers build config 00:01:19.828 mempool/cnxk: not in enabled drivers build config 00:01:19.828 
mempool/dpaa: not in enabled drivers build config 00:01:19.828 mempool/dpaa2: not in enabled drivers build config 00:01:19.828 mempool/octeontx: not in enabled drivers build config 00:01:19.828 mempool/stack: not in enabled drivers build config 00:01:19.828 dma/cnxk: not in enabled drivers build config 00:01:19.828 dma/dpaa: not in enabled drivers build config 00:01:19.828 dma/dpaa2: not in enabled drivers build config 00:01:19.828 dma/hisilicon: not in enabled drivers build config 00:01:19.828 dma/idxd: not in enabled drivers build config 00:01:19.828 dma/ioat: not in enabled drivers build config 00:01:19.828 dma/skeleton: not in enabled drivers build config 00:01:19.828 net/af_packet: not in enabled drivers build config 00:01:19.828 net/af_xdp: not in enabled drivers build config 00:01:19.828 net/ark: not in enabled drivers build config 00:01:19.828 net/atlantic: not in enabled drivers build config 00:01:19.828 net/avp: not in enabled drivers build config 00:01:19.828 net/axgbe: not in enabled drivers build config 00:01:19.828 net/bnx2x: not in enabled drivers build config 00:01:19.828 net/bnxt: not in enabled drivers build config 00:01:19.828 net/bonding: not in enabled drivers build config 00:01:19.828 net/cnxk: not in enabled drivers build config 00:01:19.828 net/cpfl: not in enabled drivers build config 00:01:19.828 net/cxgbe: not in enabled drivers build config 00:01:19.828 net/dpaa: not in enabled drivers build config 00:01:19.828 net/dpaa2: not in enabled drivers build config 00:01:19.828 net/e1000: not in enabled drivers build config 00:01:19.828 net/ena: not in enabled drivers build config 00:01:19.828 net/enetc: not in enabled drivers build config 00:01:19.828 net/enetfec: not in enabled drivers build config 00:01:19.828 net/enic: not in enabled drivers build config 00:01:19.828 net/failsafe: not in enabled drivers build config 00:01:19.828 net/fm10k: not in enabled drivers build config 00:01:19.828 net/gve: not in enabled drivers build config 00:01:19.828 net/hinic: not in enabled drivers build config 00:01:19.828 net/hns3: not in enabled drivers build config 00:01:19.828 net/i40e: not in enabled drivers build config 00:01:19.828 net/iavf: not in enabled drivers build config 00:01:19.828 net/ice: not in enabled drivers build config 00:01:19.828 net/idpf: not in enabled drivers build config 00:01:19.828 net/igc: not in enabled drivers build config 00:01:19.828 net/ionic: not in enabled drivers build config 00:01:19.828 net/ipn3ke: not in enabled drivers build config 00:01:19.828 net/ixgbe: not in enabled drivers build config 00:01:19.828 net/mana: not in enabled drivers build config 00:01:19.828 net/memif: not in enabled drivers build config 00:01:19.828 net/mlx4: not in enabled drivers build config 00:01:19.828 net/mlx5: not in enabled drivers build config 00:01:19.828 net/mvneta: not in enabled drivers build config 00:01:19.828 net/mvpp2: not in enabled drivers build config 00:01:19.828 net/netvsc: not in enabled drivers build config 00:01:19.828 net/nfb: not in enabled drivers build config 00:01:19.828 net/nfp: not in enabled drivers build config 00:01:19.828 net/ngbe: not in enabled drivers build config 00:01:19.828 net/null: not in enabled drivers build config 00:01:19.828 net/octeontx: not in enabled drivers build config 00:01:19.828 net/octeon_ep: not in enabled drivers build config 00:01:19.828 net/pcap: not in enabled drivers build config 00:01:19.828 net/pfe: not in enabled drivers build config 00:01:19.828 net/qede: not in enabled drivers build config 00:01:19.828 
net/ring: not in enabled drivers build config 00:01:19.828 net/sfc: not in enabled drivers build config 00:01:19.828 net/softnic: not in enabled drivers build config 00:01:19.828 net/tap: not in enabled drivers build config 00:01:19.828 net/thunderx: not in enabled drivers build config 00:01:19.828 net/txgbe: not in enabled drivers build config 00:01:19.828 net/vdev_netvsc: not in enabled drivers build config 00:01:19.828 net/vhost: not in enabled drivers build config 00:01:19.828 net/virtio: not in enabled drivers build config 00:01:19.828 net/vmxnet3: not in enabled drivers build config 00:01:19.828 raw/*: missing internal dependency, "rawdev" 00:01:19.828 crypto/armv8: not in enabled drivers build config 00:01:19.828 crypto/bcmfs: not in enabled drivers build config 00:01:19.828 crypto/caam_jr: not in enabled drivers build config 00:01:19.828 crypto/ccp: not in enabled drivers build config 00:01:19.828 crypto/cnxk: not in enabled drivers build config 00:01:19.828 crypto/dpaa_sec: not in enabled drivers build config 00:01:19.828 crypto/dpaa2_sec: not in enabled drivers build config 00:01:19.828 crypto/ipsec_mb: not in enabled drivers build config 00:01:19.828 crypto/mlx5: not in enabled drivers build config 00:01:19.828 crypto/mvsam: not in enabled drivers build config 00:01:19.828 crypto/nitrox: not in enabled drivers build config 00:01:19.828 crypto/null: not in enabled drivers build config 00:01:19.828 crypto/octeontx: not in enabled drivers build config 00:01:19.828 crypto/openssl: not in enabled drivers build config 00:01:19.828 crypto/scheduler: not in enabled drivers build config 00:01:19.829 crypto/uadk: not in enabled drivers build config 00:01:19.829 crypto/virtio: not in enabled drivers build config 00:01:19.829 compress/isal: not in enabled drivers build config 00:01:19.829 compress/mlx5: not in enabled drivers build config 00:01:19.829 compress/nitrox: not in enabled drivers build config 00:01:19.829 compress/octeontx: not in enabled drivers build config 00:01:19.829 compress/zlib: not in enabled drivers build config 00:01:19.829 regex/*: missing internal dependency, "regexdev" 00:01:19.829 ml/*: missing internal dependency, "mldev" 00:01:19.829 vdpa/ifc: not in enabled drivers build config 00:01:19.829 vdpa/mlx5: not in enabled drivers build config 00:01:19.829 vdpa/nfp: not in enabled drivers build config 00:01:19.829 vdpa/sfc: not in enabled drivers build config 00:01:19.829 event/*: missing internal dependency, "eventdev" 00:01:19.829 baseband/*: missing internal dependency, "bbdev" 00:01:19.829 gpu/*: missing internal dependency, "gpudev" 00:01:19.829 00:01:19.829 00:01:19.829 Build targets in project: 85 00:01:19.829 00:01:19.829 DPDK 24.03.0 00:01:19.829 00:01:19.829 User defined options 00:01:19.829 buildtype : debug 00:01:19.829 default_library : shared 00:01:19.829 libdir : lib 00:01:19.829 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:19.829 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:19.829 c_link_args : 00:01:19.829 cpu_instruction_set: native 00:01:19.829 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:19.829 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:19.829 enable_docs : false 00:01:19.829 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:19.829 enable_kmods : false 00:01:19.829 max_lcores : 128 00:01:19.829 tests : false 00:01:19.829 00:01:19.829 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:19.829 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:19.829 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:19.829 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:19.829 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:19.829 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:19.829 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:19.829 [6/268] Linking static target lib/librte_kvargs.a 00:01:19.829 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:19.829 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:19.829 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:19.829 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:19.829 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:19.829 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:19.829 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:19.829 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:19.829 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:19.829 [16/268] Linking static target lib/librte_log.a 00:01:19.829 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:19.829 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:19.829 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:19.829 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:19.829 [21/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:19.829 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:19.829 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:19.829 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:19.829 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:19.829 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:19.829 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:19.829 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:19.829 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:19.829 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:19.829 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:19.829 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:19.829 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:19.829 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:19.829 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:19.829 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:19.829 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:19.829 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:19.829 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:19.829 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:19.829 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:19.829 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:19.829 [43/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:19.829 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:19.829 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:19.829 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:19.829 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:19.829 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:19.829 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:19.829 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:19.829 [51/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.829 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:19.829 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:19.829 [54/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:19.829 [55/268] Linking static target lib/librte_telemetry.a 00:01:19.829 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:19.829 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:19.829 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:19.829 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:19.829 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:19.829 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:19.829 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:19.829 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:19.829 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:19.829 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:19.829 [66/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:19.829 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:19.829 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:19.829 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:19.829 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:19.829 [71/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:19.829 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:19.829 [73/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:19.829 
[74/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:19.829 [75/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:19.829 [76/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:19.829 [77/268] Linking static target lib/librte_ring.a 00:01:19.829 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:19.829 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:19.829 [80/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:19.829 [81/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:19.829 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:19.829 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:19.829 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:19.829 [85/268] Linking static target lib/librte_pci.a 00:01:19.829 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:19.829 [87/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:19.829 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:19.829 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:19.829 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:19.829 [91/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:19.829 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:19.830 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:19.830 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:19.830 [95/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:19.830 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:19.830 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:19.830 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:19.830 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:19.830 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:19.830 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:19.830 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:19.830 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:19.830 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:19.830 [105/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:19.830 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:19.830 [107/268] Linking static target lib/librte_rcu.a 00:01:19.830 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:19.830 [109/268] Linking static target lib/librte_mempool.a 00:01:19.830 [110/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:19.830 [111/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:19.830 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:19.830 [113/268] Linking static target lib/librte_eal.a 00:01:19.830 [114/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:20.090 [115/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:20.090 [116/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:20.090 [117/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.090 [118/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:20.091 [119/268] Linking static target lib/librte_net.a 00:01:20.091 [120/268] Linking static target lib/librte_meter.a 00:01:20.091 [121/268] Linking target lib/librte_log.so.24.1 00:01:20.091 [122/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:20.091 [123/268] Linking static target lib/librte_mbuf.a 00:01:20.091 [124/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.091 [125/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:20.091 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:20.091 [127/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:20.091 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:20.091 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:20.091 [130/268] Linking static target lib/librte_cmdline.a 00:01:20.091 [131/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:20.091 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:20.091 [133/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:20.091 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:20.091 [135/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:20.351 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:20.351 [137/268] Linking static target lib/librte_timer.a 00:01:20.351 [138/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.351 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:20.351 [140/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:20.351 [141/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:20.351 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:20.351 [143/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:20.351 [144/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:20.351 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.351 [146/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:20.351 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:20.351 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:20.351 [149/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:20.351 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:20.351 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:20.351 [152/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:20.351 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:20.351 [154/268] Linking target lib/librte_kvargs.so.24.1 00:01:20.351 [155/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:20.351 [156/268] Linking 
target lib/librte_telemetry.so.24.1 00:01:20.351 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:20.351 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:20.351 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:20.351 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:20.351 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:20.351 [162/268] Linking static target lib/librte_dmadev.a 00:01:20.351 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:20.351 [164/268] Linking static target lib/librte_compressdev.a 00:01:20.351 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:20.351 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:20.351 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:20.351 [168/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.351 [169/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:20.351 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:20.351 [171/268] Linking static target lib/librte_reorder.a 00:01:20.351 [172/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.351 [173/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:20.351 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:20.351 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:20.351 [176/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:20.351 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:20.351 [178/268] Linking static target lib/librte_power.a 00:01:20.351 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:20.351 [180/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:20.351 [181/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:20.351 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:20.351 [183/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:20.351 [184/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:20.351 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:20.351 [186/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:20.351 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:20.351 [188/268] Linking static target lib/librte_security.a 00:01:20.351 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:20.351 [190/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:20.351 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:20.351 [192/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:20.610 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:20.610 [194/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:20.610 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:20.610 [196/268] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:20.610 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:20.610 [198/268] Linking static target drivers/librte_bus_vdev.a 00:01:20.610 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:20.610 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:20.610 [201/268] Linking static target lib/librte_hash.a 00:01:20.610 [202/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.610 [203/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.610 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:20.610 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:20.610 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:20.610 [207/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:20.610 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.610 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.610 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.610 [211/268] Linking static target lib/librte_cryptodev.a 00:01:20.610 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.610 [213/268] Linking static target drivers/librte_mempool_ring.a 00:01:20.610 [214/268] Linking static target drivers/librte_bus_pci.a 00:01:20.869 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.869 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.869 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.129 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.129 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.129 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.129 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:21.129 [222/268] Linking static target lib/librte_ethdev.a 00:01:21.129 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:21.389 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.649 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.649 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.649 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.217 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:22.217 [229/268] Linking static target lib/librte_vhost.a 00:01:23.155 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.534 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.657 [232/268] Generating lib/ethdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:33.226 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.226 [234/268] Linking target lib/librte_eal.so.24.1 00:01:33.485 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:33.485 [236/268] Linking target lib/librte_ring.so.24.1 00:01:33.485 [237/268] Linking target lib/librte_meter.so.24.1 00:01:33.485 [238/268] Linking target lib/librte_timer.so.24.1 00:01:33.485 [239/268] Linking target lib/librte_pci.so.24.1 00:01:33.485 [240/268] Linking target lib/librte_dmadev.so.24.1 00:01:33.485 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:33.485 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:33.485 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:33.744 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:33.744 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:33.744 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:33.745 [247/268] Linking target lib/librte_mempool.so.24.1 00:01:33.745 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:33.745 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:33.745 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:33.745 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:33.745 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:33.745 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:34.003 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:34.003 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:34.003 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:34.003 [257/268] Linking target lib/librte_net.so.24.1 00:01:34.003 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:34.261 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:34.261 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:34.261 [261/268] Linking target lib/librte_hash.so.24.1 00:01:34.261 [262/268] Linking target lib/librte_security.so.24.1 00:01:34.261 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:34.261 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:34.519 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:34.519 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:34.519 [267/268] Linking target lib/librte_power.so.24.1 00:01:34.519 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:34.519 INFO: autodetecting backend as ninja 00:01:34.519 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 72 00:01:35.900 CC lib/ut_mock/mock.o 00:01:35.900 CC lib/log/log.o 00:01:35.900 CC lib/log/log_flags.o 00:01:35.900 CC lib/log/log_deprecated.o 00:01:35.900 CC lib/ut/ut.o 00:01:35.900 LIB libspdk_ut_mock.a 00:01:35.900 LIB libspdk_ut.a 00:01:35.900 LIB libspdk_log.a 00:01:35.900 SO libspdk_ut.so.2.0 00:01:35.900 SO libspdk_ut_mock.so.6.0 00:01:35.900 SO libspdk_log.so.7.0 00:01:35.900 SYMLINK libspdk_ut_mock.so 00:01:35.900 SYMLINK libspdk_ut.so 00:01:35.900 
SYMLINK libspdk_log.so 00:01:36.468 CC lib/util/base64.o 00:01:36.468 CXX lib/trace_parser/trace.o 00:01:36.468 CC lib/util/bit_array.o 00:01:36.468 CC lib/util/cpuset.o 00:01:36.468 CC lib/util/crc16.o 00:01:36.468 CC lib/util/crc32.o 00:01:36.468 CC lib/dma/dma.o 00:01:36.468 CC lib/util/crc32c.o 00:01:36.468 CC lib/util/crc32_ieee.o 00:01:36.468 CC lib/util/crc64.o 00:01:36.468 CC lib/util/dif.o 00:01:36.468 CC lib/util/fd.o 00:01:36.468 CC lib/ioat/ioat.o 00:01:36.468 CC lib/util/file.o 00:01:36.468 CC lib/util/hexlify.o 00:01:36.468 CC lib/util/iov.o 00:01:36.468 CC lib/util/math.o 00:01:36.468 CC lib/util/string.o 00:01:36.468 CC lib/util/pipe.o 00:01:36.468 CC lib/util/strerror_tls.o 00:01:36.468 CC lib/util/uuid.o 00:01:36.468 CC lib/util/fd_group.o 00:01:36.468 CC lib/util/xor.o 00:01:36.468 CC lib/util/zipf.o 00:01:36.468 CC lib/vfio_user/host/vfio_user.o 00:01:36.468 CC lib/vfio_user/host/vfio_user_pci.o 00:01:36.468 LIB libspdk_dma.a 00:01:36.468 SO libspdk_dma.so.4.0 00:01:36.468 LIB libspdk_ioat.a 00:01:36.727 SO libspdk_ioat.so.7.0 00:01:36.727 SYMLINK libspdk_dma.so 00:01:36.727 SYMLINK libspdk_ioat.so 00:01:36.727 LIB libspdk_vfio_user.a 00:01:36.727 SO libspdk_vfio_user.so.5.0 00:01:36.727 LIB libspdk_util.a 00:01:36.727 SYMLINK libspdk_vfio_user.so 00:01:36.727 SO libspdk_util.so.9.1 00:01:36.985 SYMLINK libspdk_util.so 00:01:36.985 LIB libspdk_trace_parser.a 00:01:36.985 SO libspdk_trace_parser.so.5.0 00:01:37.244 SYMLINK libspdk_trace_parser.so 00:01:37.244 CC lib/env_dpdk/env.o 00:01:37.244 CC lib/env_dpdk/pci.o 00:01:37.244 CC lib/env_dpdk/memory.o 00:01:37.244 CC lib/env_dpdk/init.o 00:01:37.244 CC lib/env_dpdk/threads.o 00:01:37.244 CC lib/rdma_provider/common.o 00:01:37.244 CC lib/json/json_util.o 00:01:37.244 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:37.244 CC lib/json/json_parse.o 00:01:37.244 CC lib/env_dpdk/pci_ioat.o 00:01:37.244 CC lib/rdma_utils/rdma_utils.o 00:01:37.244 CC lib/vmd/vmd.o 00:01:37.244 CC lib/env_dpdk/pci_virtio.o 00:01:37.244 CC lib/json/json_write.o 00:01:37.244 CC lib/vmd/led.o 00:01:37.244 CC lib/env_dpdk/pci_vmd.o 00:01:37.244 CC lib/env_dpdk/pci_idxd.o 00:01:37.244 CC lib/idxd/idxd.o 00:01:37.244 CC lib/env_dpdk/pci_event.o 00:01:37.244 CC lib/env_dpdk/sigbus_handler.o 00:01:37.244 CC lib/idxd/idxd_user.o 00:01:37.244 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:37.244 CC lib/idxd/idxd_kernel.o 00:01:37.244 CC lib/env_dpdk/pci_dpdk.o 00:01:37.244 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:37.244 CC lib/conf/conf.o 00:01:37.502 LIB libspdk_rdma_provider.a 00:01:37.502 SO libspdk_rdma_provider.so.6.0 00:01:37.502 LIB libspdk_conf.a 00:01:37.502 LIB libspdk_rdma_utils.a 00:01:37.502 LIB libspdk_json.a 00:01:37.502 SO libspdk_conf.so.6.0 00:01:37.760 SYMLINK libspdk_rdma_provider.so 00:01:37.760 SO libspdk_rdma_utils.so.1.0 00:01:37.760 SO libspdk_json.so.6.0 00:01:37.760 SYMLINK libspdk_conf.so 00:01:37.760 SYMLINK libspdk_rdma_utils.so 00:01:37.760 SYMLINK libspdk_json.so 00:01:37.760 LIB libspdk_idxd.a 00:01:37.760 SO libspdk_idxd.so.12.0 00:01:37.760 LIB libspdk_vmd.a 00:01:38.019 SO libspdk_vmd.so.6.0 00:01:38.019 SYMLINK libspdk_idxd.so 00:01:38.019 SYMLINK libspdk_vmd.so 00:01:38.019 CC lib/jsonrpc/jsonrpc_server.o 00:01:38.019 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:38.019 CC lib/jsonrpc/jsonrpc_client.o 00:01:38.019 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:38.278 LIB libspdk_jsonrpc.a 00:01:38.278 SO libspdk_jsonrpc.so.6.0 00:01:38.278 LIB libspdk_env_dpdk.a 00:01:38.537 SYMLINK libspdk_jsonrpc.so 00:01:38.537 SO 
libspdk_env_dpdk.so.14.1 00:01:38.537 SYMLINK libspdk_env_dpdk.so 00:01:38.796 CC lib/rpc/rpc.o 00:01:39.054 LIB libspdk_rpc.a 00:01:39.054 SO libspdk_rpc.so.6.0 00:01:39.054 SYMLINK libspdk_rpc.so 00:01:39.622 CC lib/trace/trace.o 00:01:39.622 CC lib/trace/trace_flags.o 00:01:39.622 CC lib/trace/trace_rpc.o 00:01:39.622 CC lib/notify/notify.o 00:01:39.622 CC lib/notify/notify_rpc.o 00:01:39.622 CC lib/keyring/keyring.o 00:01:39.622 CC lib/keyring/keyring_rpc.o 00:01:39.622 LIB libspdk_notify.a 00:01:39.622 SO libspdk_notify.so.6.0 00:01:39.622 LIB libspdk_keyring.a 00:01:39.622 LIB libspdk_trace.a 00:01:39.622 SO libspdk_keyring.so.1.0 00:01:39.881 SYMLINK libspdk_notify.so 00:01:39.881 SO libspdk_trace.so.10.0 00:01:39.881 SYMLINK libspdk_keyring.so 00:01:39.881 SYMLINK libspdk_trace.so 00:01:40.176 CC lib/thread/thread.o 00:01:40.176 CC lib/sock/sock.o 00:01:40.176 CC lib/sock/sock_rpc.o 00:01:40.176 CC lib/thread/iobuf.o 00:01:40.456 LIB libspdk_sock.a 00:01:40.457 SO libspdk_sock.so.10.0 00:01:40.715 SYMLINK libspdk_sock.so 00:01:40.973 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:40.973 CC lib/nvme/nvme_ctrlr.o 00:01:40.973 CC lib/nvme/nvme_fabric.o 00:01:40.973 CC lib/nvme/nvme_ns_cmd.o 00:01:40.973 CC lib/nvme/nvme_ns.o 00:01:40.973 CC lib/nvme/nvme_pcie_common.o 00:01:40.973 CC lib/nvme/nvme_pcie.o 00:01:40.973 CC lib/nvme/nvme_qpair.o 00:01:40.973 CC lib/nvme/nvme.o 00:01:40.973 CC lib/nvme/nvme_quirks.o 00:01:40.973 CC lib/nvme/nvme_transport.o 00:01:40.973 CC lib/nvme/nvme_discovery.o 00:01:40.973 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:40.973 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:40.973 CC lib/nvme/nvme_tcp.o 00:01:40.973 CC lib/nvme/nvme_opal.o 00:01:40.973 CC lib/nvme/nvme_io_msg.o 00:01:40.973 CC lib/nvme/nvme_poll_group.o 00:01:40.973 CC lib/nvme/nvme_zns.o 00:01:40.973 CC lib/nvme/nvme_stubs.o 00:01:40.973 CC lib/nvme/nvme_auth.o 00:01:40.973 CC lib/nvme/nvme_cuse.o 00:01:40.973 CC lib/nvme/nvme_rdma.o 00:01:41.232 LIB libspdk_thread.a 00:01:41.232 SO libspdk_thread.so.10.1 00:01:41.489 SYMLINK libspdk_thread.so 00:01:41.746 CC lib/init/json_config.o 00:01:41.746 CC lib/init/subsystem_rpc.o 00:01:41.746 CC lib/accel/accel.o 00:01:41.746 CC lib/init/subsystem.o 00:01:41.746 CC lib/accel/accel_rpc.o 00:01:41.746 CC lib/init/rpc.o 00:01:41.746 CC lib/accel/accel_sw.o 00:01:41.746 CC lib/blob/blobstore.o 00:01:41.746 CC lib/blob/request.o 00:01:41.746 CC lib/blob/zeroes.o 00:01:41.746 CC lib/blob/blob_bs_dev.o 00:01:41.746 CC lib/virtio/virtio.o 00:01:41.746 CC lib/virtio/virtio_vhost_user.o 00:01:41.746 CC lib/virtio/virtio_pci.o 00:01:41.746 CC lib/virtio/virtio_vfio_user.o 00:01:42.003 LIB libspdk_init.a 00:01:42.003 SO libspdk_init.so.5.0 00:01:42.003 LIB libspdk_virtio.a 00:01:42.003 SO libspdk_virtio.so.7.0 00:01:42.003 SYMLINK libspdk_init.so 00:01:42.261 SYMLINK libspdk_virtio.so 00:01:42.518 CC lib/event/app.o 00:01:42.518 CC lib/event/reactor.o 00:01:42.518 CC lib/event/log_rpc.o 00:01:42.518 CC lib/event/app_rpc.o 00:01:42.518 CC lib/event/scheduler_static.o 00:01:42.518 LIB libspdk_accel.a 00:01:42.518 SO libspdk_accel.so.15.1 00:01:42.776 SYMLINK libspdk_accel.so 00:01:42.776 LIB libspdk_nvme.a 00:01:42.776 SO libspdk_nvme.so.13.1 00:01:42.776 LIB libspdk_event.a 00:01:42.776 SO libspdk_event.so.14.0 00:01:43.034 SYMLINK libspdk_event.so 00:01:43.034 CC lib/bdev/bdev.o 00:01:43.034 CC lib/bdev/bdev_rpc.o 00:01:43.034 CC lib/bdev/bdev_zone.o 00:01:43.034 CC lib/bdev/part.o 00:01:43.034 CC lib/bdev/scsi_nvme.o 00:01:43.034 SYMLINK libspdk_nvme.so 00:01:43.969 LIB 
libspdk_blob.a 00:01:43.969 SO libspdk_blob.so.11.0 00:01:43.969 SYMLINK libspdk_blob.so 00:01:44.536 CC lib/blobfs/blobfs.o 00:01:44.536 CC lib/blobfs/tree.o 00:01:44.536 CC lib/lvol/lvol.o 00:01:44.794 LIB libspdk_bdev.a 00:01:44.794 SO libspdk_bdev.so.15.1 00:01:45.053 SYMLINK libspdk_bdev.so 00:01:45.053 LIB libspdk_blobfs.a 00:01:45.053 SO libspdk_blobfs.so.10.0 00:01:45.053 LIB libspdk_lvol.a 00:01:45.053 SYMLINK libspdk_blobfs.so 00:01:45.053 SO libspdk_lvol.so.10.0 00:01:45.319 SYMLINK libspdk_lvol.so 00:01:45.319 CC lib/ftl/ftl_core.o 00:01:45.319 CC lib/ftl/ftl_init.o 00:01:45.319 CC lib/ftl/ftl_layout.o 00:01:45.319 CC lib/ftl/ftl_debug.o 00:01:45.319 CC lib/ublk/ublk.o 00:01:45.319 CC lib/ftl/ftl_io.o 00:01:45.319 CC lib/nbd/nbd.o 00:01:45.319 CC lib/ublk/ublk_rpc.o 00:01:45.319 CC lib/nvmf/ctrlr.o 00:01:45.319 CC lib/ftl/ftl_sb.o 00:01:45.319 CC lib/nbd/nbd_rpc.o 00:01:45.319 CC lib/nvmf/ctrlr_discovery.o 00:01:45.319 CC lib/ftl/ftl_l2p.o 00:01:45.319 CC lib/nvmf/ctrlr_bdev.o 00:01:45.319 CC lib/scsi/dev.o 00:01:45.319 CC lib/ftl/ftl_l2p_flat.o 00:01:45.319 CC lib/scsi/lun.o 00:01:45.319 CC lib/nvmf/subsystem.o 00:01:45.319 CC lib/ftl/ftl_nv_cache.o 00:01:45.319 CC lib/scsi/port.o 00:01:45.319 CC lib/nvmf/nvmf.o 00:01:45.319 CC lib/ftl/ftl_band.o 00:01:45.319 CC lib/ftl/ftl_band_ops.o 00:01:45.319 CC lib/scsi/scsi.o 00:01:45.319 CC lib/nvmf/nvmf_rpc.o 00:01:45.319 CC lib/scsi/scsi_bdev.o 00:01:45.319 CC lib/nvmf/transport.o 00:01:45.319 CC lib/scsi/scsi_pr.o 00:01:45.319 CC lib/nvmf/tcp.o 00:01:45.319 CC lib/ftl/ftl_writer.o 00:01:45.319 CC lib/scsi/scsi_rpc.o 00:01:45.319 CC lib/scsi/task.o 00:01:45.319 CC lib/nvmf/stubs.o 00:01:45.319 CC lib/ftl/ftl_rq.o 00:01:45.319 CC lib/nvmf/mdns_server.o 00:01:45.319 CC lib/ftl/ftl_reloc.o 00:01:45.319 CC lib/ftl/ftl_p2l.o 00:01:45.319 CC lib/nvmf/rdma.o 00:01:45.319 CC lib/nvmf/auth.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt.o 00:01:45.319 CC lib/ftl/ftl_l2p_cache.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:45.319 CC lib/ftl/utils/ftl_conf.o 00:01:45.319 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:45.319 CC lib/ftl/utils/ftl_md.o 00:01:45.319 CC lib/ftl/utils/ftl_mempool.o 00:01:45.319 CC lib/ftl/utils/ftl_bitmap.o 00:01:45.319 CC lib/ftl/utils/ftl_property.o 00:01:45.319 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:45.319 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:45.319 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:45.319 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:45.319 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:45.319 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:45.319 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:45.319 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:45.319 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:45.319 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:45.319 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:45.319 CC lib/ftl/base/ftl_base_bdev.o 00:01:45.319 CC lib/ftl/base/ftl_base_dev.o 00:01:45.319 CC lib/ftl/ftl_trace.o 00:01:45.886 LIB libspdk_nbd.a 00:01:45.886 SO libspdk_nbd.so.7.0 00:01:46.145 LIB libspdk_scsi.a 00:01:46.145 SYMLINK libspdk_nbd.so 00:01:46.145 SO libspdk_scsi.so.9.0 
00:01:46.145 LIB libspdk_ublk.a 00:01:46.145 SO libspdk_ublk.so.3.0 00:01:46.145 SYMLINK libspdk_scsi.so 00:01:46.145 SYMLINK libspdk_ublk.so 00:01:46.404 LIB libspdk_ftl.a 00:01:46.404 SO libspdk_ftl.so.9.0 00:01:46.404 CC lib/iscsi/conn.o 00:01:46.404 CC lib/iscsi/init_grp.o 00:01:46.404 CC lib/iscsi/iscsi.o 00:01:46.662 CC lib/iscsi/md5.o 00:01:46.662 CC lib/iscsi/param.o 00:01:46.662 CC lib/iscsi/portal_grp.o 00:01:46.662 CC lib/iscsi/tgt_node.o 00:01:46.662 CC lib/iscsi/iscsi_subsystem.o 00:01:46.662 CC lib/iscsi/iscsi_rpc.o 00:01:46.662 CC lib/iscsi/task.o 00:01:46.662 CC lib/vhost/vhost.o 00:01:46.662 CC lib/vhost/vhost_rpc.o 00:01:46.662 CC lib/vhost/rte_vhost_user.o 00:01:46.662 CC lib/vhost/vhost_scsi.o 00:01:46.662 CC lib/vhost/vhost_blk.o 00:01:46.919 SYMLINK libspdk_ftl.so 00:01:47.177 LIB libspdk_nvmf.a 00:01:47.177 SO libspdk_nvmf.so.18.1 00:01:47.437 LIB libspdk_vhost.a 00:01:47.437 SO libspdk_vhost.so.8.0 00:01:47.437 SYMLINK libspdk_nvmf.so 00:01:47.437 SYMLINK libspdk_vhost.so 00:01:47.437 LIB libspdk_iscsi.a 00:01:47.697 SO libspdk_iscsi.so.8.0 00:01:47.697 SYMLINK libspdk_iscsi.so 00:01:48.264 CC module/env_dpdk/env_dpdk_rpc.o 00:01:48.522 LIB libspdk_env_dpdk_rpc.a 00:01:48.522 CC module/keyring/file/keyring_rpc.o 00:01:48.522 CC module/keyring/file/keyring.o 00:01:48.522 CC module/accel/dsa/accel_dsa_rpc.o 00:01:48.522 CC module/accel/dsa/accel_dsa.o 00:01:48.522 CC module/sock/posix/posix.o 00:01:48.522 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:48.522 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:48.522 CC module/accel/ioat/accel_ioat.o 00:01:48.522 CC module/keyring/linux/keyring.o 00:01:48.522 CC module/accel/ioat/accel_ioat_rpc.o 00:01:48.522 CC module/keyring/linux/keyring_rpc.o 00:01:48.522 CC module/scheduler/gscheduler/gscheduler.o 00:01:48.522 CC module/accel/error/accel_error.o 00:01:48.522 CC module/blob/bdev/blob_bdev.o 00:01:48.522 CC module/accel/error/accel_error_rpc.o 00:01:48.522 SO libspdk_env_dpdk_rpc.so.6.0 00:01:48.522 CC module/accel/iaa/accel_iaa.o 00:01:48.522 CC module/accel/iaa/accel_iaa_rpc.o 00:01:48.522 SYMLINK libspdk_env_dpdk_rpc.so 00:01:48.781 LIB libspdk_keyring_file.a 00:01:48.781 LIB libspdk_scheduler_gscheduler.a 00:01:48.781 LIB libspdk_keyring_linux.a 00:01:48.781 LIB libspdk_scheduler_dpdk_governor.a 00:01:48.781 LIB libspdk_accel_error.a 00:01:48.781 LIB libspdk_scheduler_dynamic.a 00:01:48.781 LIB libspdk_accel_ioat.a 00:01:48.781 SO libspdk_scheduler_gscheduler.so.4.0 00:01:48.781 SO libspdk_keyring_file.so.1.0 00:01:48.781 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:48.781 SO libspdk_scheduler_dynamic.so.4.0 00:01:48.781 SO libspdk_keyring_linux.so.1.0 00:01:48.781 SO libspdk_accel_ioat.so.6.0 00:01:48.781 LIB libspdk_accel_dsa.a 00:01:48.781 SO libspdk_accel_error.so.2.0 00:01:48.781 LIB libspdk_accel_iaa.a 00:01:48.781 SYMLINK libspdk_scheduler_gscheduler.so 00:01:48.781 LIB libspdk_blob_bdev.a 00:01:48.781 SYMLINK libspdk_keyring_file.so 00:01:48.781 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:48.781 SO libspdk_accel_dsa.so.5.0 00:01:48.781 SYMLINK libspdk_scheduler_dynamic.so 00:01:48.781 SYMLINK libspdk_keyring_linux.so 00:01:48.781 SO libspdk_accel_iaa.so.3.0 00:01:48.781 SYMLINK libspdk_accel_ioat.so 00:01:48.781 SYMLINK libspdk_accel_error.so 00:01:48.781 SO libspdk_blob_bdev.so.11.0 00:01:48.781 SYMLINK libspdk_accel_dsa.so 00:01:48.781 SYMLINK libspdk_accel_iaa.so 00:01:48.781 SYMLINK libspdk_blob_bdev.so 00:01:49.042 LIB libspdk_sock_posix.a 00:01:49.042 SO libspdk_sock_posix.so.6.0 
00:01:49.300 SYMLINK libspdk_sock_posix.so 00:01:49.300 CC module/bdev/split/vbdev_split.o 00:01:49.300 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:49.300 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:49.300 CC module/bdev/split/vbdev_split_rpc.o 00:01:49.300 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:49.300 CC module/bdev/nvme/bdev_nvme.o 00:01:49.300 CC module/bdev/nvme/nvme_rpc.o 00:01:49.300 CC module/blobfs/bdev/blobfs_bdev.o 00:01:49.300 CC module/bdev/nvme/bdev_mdns_client.o 00:01:49.300 CC module/bdev/null/bdev_null_rpc.o 00:01:49.300 CC module/bdev/null/bdev_null.o 00:01:49.300 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:49.300 CC module/bdev/iscsi/bdev_iscsi.o 00:01:49.300 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:49.300 CC module/bdev/nvme/vbdev_opal.o 00:01:49.300 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:49.300 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:49.300 CC module/bdev/delay/vbdev_delay.o 00:01:49.300 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:49.300 CC module/bdev/error/vbdev_error_rpc.o 00:01:49.300 CC module/bdev/error/vbdev_error.o 00:01:49.300 CC module/bdev/lvol/vbdev_lvol.o 00:01:49.300 CC module/bdev/raid/bdev_raid.o 00:01:49.300 CC module/bdev/malloc/bdev_malloc.o 00:01:49.300 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:49.300 CC module/bdev/raid/bdev_raid_sb.o 00:01:49.300 CC module/bdev/raid/bdev_raid_rpc.o 00:01:49.300 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:49.300 CC module/bdev/raid/raid0.o 00:01:49.300 CC module/bdev/raid/raid1.o 00:01:49.300 CC module/bdev/raid/concat.o 00:01:49.300 CC module/bdev/passthru/vbdev_passthru.o 00:01:49.300 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:49.300 CC module/bdev/ftl/bdev_ftl.o 00:01:49.300 CC module/bdev/aio/bdev_aio.o 00:01:49.300 CC module/bdev/aio/bdev_aio_rpc.o 00:01:49.300 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:49.300 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:49.300 CC module/bdev/gpt/gpt.o 00:01:49.300 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:49.300 CC module/bdev/gpt/vbdev_gpt.o 00:01:49.300 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:49.557 LIB libspdk_bdev_split.a 00:01:49.557 SO libspdk_bdev_split.so.6.0 00:01:49.557 LIB libspdk_blobfs_bdev.a 00:01:49.557 LIB libspdk_bdev_null.a 00:01:49.815 SO libspdk_bdev_null.so.6.0 00:01:49.815 SO libspdk_blobfs_bdev.so.6.0 00:01:49.815 LIB libspdk_bdev_passthru.a 00:01:49.815 LIB libspdk_bdev_zone_block.a 00:01:49.815 LIB libspdk_bdev_ftl.a 00:01:49.815 SYMLINK libspdk_bdev_split.so 00:01:49.815 LIB libspdk_bdev_gpt.a 00:01:49.815 SO libspdk_bdev_zone_block.so.6.0 00:01:49.815 LIB libspdk_bdev_delay.a 00:01:49.815 SO libspdk_bdev_ftl.so.6.0 00:01:49.815 SO libspdk_bdev_passthru.so.6.0 00:01:49.815 SO libspdk_bdev_gpt.so.6.0 00:01:49.815 SYMLINK libspdk_blobfs_bdev.so 00:01:49.815 SYMLINK libspdk_bdev_null.so 00:01:49.815 LIB libspdk_bdev_iscsi.a 00:01:49.815 LIB libspdk_bdev_error.a 00:01:49.815 SO libspdk_bdev_delay.so.6.0 00:01:49.815 SO libspdk_bdev_iscsi.so.6.0 00:01:49.815 SO libspdk_bdev_error.so.6.0 00:01:49.815 LIB libspdk_bdev_aio.a 00:01:49.815 SYMLINK libspdk_bdev_zone_block.so 00:01:49.815 SYMLINK libspdk_bdev_passthru.so 00:01:49.815 SYMLINK libspdk_bdev_ftl.so 00:01:49.815 SYMLINK libspdk_bdev_gpt.so 00:01:49.815 LIB libspdk_bdev_malloc.a 00:01:49.815 SO libspdk_bdev_aio.so.6.0 00:01:49.815 SYMLINK libspdk_bdev_delay.so 00:01:49.815 SYMLINK libspdk_bdev_iscsi.so 00:01:49.815 SO libspdk_bdev_malloc.so.6.0 00:01:49.815 SYMLINK libspdk_bdev_error.so 00:01:50.073 LIB libspdk_bdev_lvol.a 00:01:50.073 SYMLINK 
libspdk_bdev_aio.so 00:01:50.073 LIB libspdk_bdev_virtio.a 00:01:50.073 SYMLINK libspdk_bdev_malloc.so 00:01:50.073 SO libspdk_bdev_lvol.so.6.0 00:01:50.073 SO libspdk_bdev_virtio.so.6.0 00:01:50.073 SYMLINK libspdk_bdev_lvol.so 00:01:50.073 SYMLINK libspdk_bdev_virtio.so 00:01:50.332 LIB libspdk_bdev_raid.a 00:01:50.332 SO libspdk_bdev_raid.so.6.0 00:01:50.332 SYMLINK libspdk_bdev_raid.so 00:01:51.299 LIB libspdk_bdev_nvme.a 00:01:51.299 SO libspdk_bdev_nvme.so.7.0 00:01:51.299 SYMLINK libspdk_bdev_nvme.so 00:01:52.235 CC module/event/subsystems/sock/sock.o 00:01:52.235 CC module/event/subsystems/scheduler/scheduler.o 00:01:52.235 CC module/event/subsystems/vmd/vmd.o 00:01:52.235 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:52.235 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:52.235 CC module/event/subsystems/iobuf/iobuf.o 00:01:52.235 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:52.235 CC module/event/subsystems/keyring/keyring.o 00:01:52.235 LIB libspdk_event_scheduler.a 00:01:52.235 LIB libspdk_event_sock.a 00:01:52.235 LIB libspdk_event_vhost_blk.a 00:01:52.235 LIB libspdk_event_keyring.a 00:01:52.235 LIB libspdk_event_iobuf.a 00:01:52.235 LIB libspdk_event_vmd.a 00:01:52.235 SO libspdk_event_scheduler.so.4.0 00:01:52.235 SO libspdk_event_sock.so.5.0 00:01:52.235 SO libspdk_event_vhost_blk.so.3.0 00:01:52.235 SO libspdk_event_keyring.so.1.0 00:01:52.235 SO libspdk_event_vmd.so.6.0 00:01:52.235 SO libspdk_event_iobuf.so.3.0 00:01:52.235 SYMLINK libspdk_event_scheduler.so 00:01:52.235 SYMLINK libspdk_event_sock.so 00:01:52.235 SYMLINK libspdk_event_vhost_blk.so 00:01:52.235 SYMLINK libspdk_event_keyring.so 00:01:52.235 SYMLINK libspdk_event_vmd.so 00:01:52.235 SYMLINK libspdk_event_iobuf.so 00:01:52.803 CC module/event/subsystems/accel/accel.o 00:01:52.803 LIB libspdk_event_accel.a 00:01:52.803 SO libspdk_event_accel.so.6.0 00:01:53.063 SYMLINK libspdk_event_accel.so 00:01:53.322 CC module/event/subsystems/bdev/bdev.o 00:01:53.581 LIB libspdk_event_bdev.a 00:01:53.581 SO libspdk_event_bdev.so.6.0 00:01:53.581 SYMLINK libspdk_event_bdev.so 00:01:53.840 CC module/event/subsystems/ublk/ublk.o 00:01:53.840 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:53.840 CC module/event/subsystems/nbd/nbd.o 00:01:53.840 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:54.099 CC module/event/subsystems/scsi/scsi.o 00:01:54.099 LIB libspdk_event_nbd.a 00:01:54.099 LIB libspdk_event_ublk.a 00:01:54.099 LIB libspdk_event_scsi.a 00:01:54.099 SO libspdk_event_nbd.so.6.0 00:01:54.099 SO libspdk_event_ublk.so.3.0 00:01:54.099 SO libspdk_event_scsi.so.6.0 00:01:54.099 LIB libspdk_event_nvmf.a 00:01:54.099 SYMLINK libspdk_event_ublk.so 00:01:54.099 SYMLINK libspdk_event_nbd.so 00:01:54.358 SO libspdk_event_nvmf.so.6.0 00:01:54.358 SYMLINK libspdk_event_scsi.so 00:01:54.358 SYMLINK libspdk_event_nvmf.so 00:01:54.617 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:54.617 CC module/event/subsystems/iscsi/iscsi.o 00:01:54.876 LIB libspdk_event_vhost_scsi.a 00:01:54.876 LIB libspdk_event_iscsi.a 00:01:54.876 SO libspdk_event_vhost_scsi.so.3.0 00:01:54.876 SO libspdk_event_iscsi.so.6.0 00:01:54.876 SYMLINK libspdk_event_vhost_scsi.so 00:01:54.876 SYMLINK libspdk_event_iscsi.so 00:01:55.135 SO libspdk.so.6.0 00:01:55.135 SYMLINK libspdk.so 00:01:55.393 CC app/trace_record/trace_record.o 00:01:55.393 CXX app/trace/trace.o 00:01:55.393 CC app/spdk_nvme_discover/discovery_aer.o 00:01:55.393 CC app/spdk_nvme_perf/perf.o 00:01:55.393 CC app/spdk_nvme_identify/identify.o 00:01:55.393 CC 
test/rpc_client/rpc_client_test.o 00:01:55.393 CC app/spdk_lspci/spdk_lspci.o 00:01:55.393 TEST_HEADER include/spdk/accel_module.h 00:01:55.393 TEST_HEADER include/spdk/accel.h 00:01:55.393 TEST_HEADER include/spdk/barrier.h 00:01:55.393 TEST_HEADER include/spdk/assert.h 00:01:55.393 TEST_HEADER include/spdk/bdev.h 00:01:55.393 TEST_HEADER include/spdk/base64.h 00:01:55.393 TEST_HEADER include/spdk/bdev_module.h 00:01:55.393 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:55.393 TEST_HEADER include/spdk/bdev_zone.h 00:01:55.393 TEST_HEADER include/spdk/bit_array.h 00:01:55.393 TEST_HEADER include/spdk/bit_pool.h 00:01:55.393 TEST_HEADER include/spdk/blob_bdev.h 00:01:55.393 TEST_HEADER include/spdk/blobfs.h 00:01:55.393 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:55.393 TEST_HEADER include/spdk/config.h 00:01:55.393 TEST_HEADER include/spdk/blob.h 00:01:55.393 TEST_HEADER include/spdk/cpuset.h 00:01:55.393 CC app/spdk_top/spdk_top.o 00:01:55.393 TEST_HEADER include/spdk/conf.h 00:01:55.393 TEST_HEADER include/spdk/crc32.h 00:01:55.393 TEST_HEADER include/spdk/crc64.h 00:01:55.393 TEST_HEADER include/spdk/crc16.h 00:01:55.393 TEST_HEADER include/spdk/dif.h 00:01:55.393 TEST_HEADER include/spdk/dma.h 00:01:55.393 TEST_HEADER include/spdk/endian.h 00:01:55.393 TEST_HEADER include/spdk/env.h 00:01:55.393 TEST_HEADER include/spdk/event.h 00:01:55.393 TEST_HEADER include/spdk/env_dpdk.h 00:01:55.393 TEST_HEADER include/spdk/fd.h 00:01:55.393 TEST_HEADER include/spdk/fd_group.h 00:01:55.393 TEST_HEADER include/spdk/file.h 00:01:55.393 TEST_HEADER include/spdk/ftl.h 00:01:55.393 TEST_HEADER include/spdk/gpt_spec.h 00:01:55.393 TEST_HEADER include/spdk/hexlify.h 00:01:55.393 TEST_HEADER include/spdk/idxd.h 00:01:55.393 TEST_HEADER include/spdk/histogram_data.h 00:01:55.393 TEST_HEADER include/spdk/init.h 00:01:55.393 TEST_HEADER include/spdk/idxd_spec.h 00:01:55.393 TEST_HEADER include/spdk/ioat.h 00:01:55.393 TEST_HEADER include/spdk/ioat_spec.h 00:01:55.393 TEST_HEADER include/spdk/iscsi_spec.h 00:01:55.393 TEST_HEADER include/spdk/json.h 00:01:55.393 TEST_HEADER include/spdk/jsonrpc.h 00:01:55.393 TEST_HEADER include/spdk/keyring.h 00:01:55.393 TEST_HEADER include/spdk/keyring_module.h 00:01:55.393 CC app/iscsi_tgt/iscsi_tgt.o 00:01:55.393 TEST_HEADER include/spdk/likely.h 00:01:55.393 TEST_HEADER include/spdk/log.h 00:01:55.393 TEST_HEADER include/spdk/lvol.h 00:01:55.393 CC app/nvmf_tgt/nvmf_main.o 00:01:55.393 TEST_HEADER include/spdk/memory.h 00:01:55.393 CC app/spdk_dd/spdk_dd.o 00:01:55.393 TEST_HEADER include/spdk/mmio.h 00:01:55.393 TEST_HEADER include/spdk/notify.h 00:01:55.393 TEST_HEADER include/spdk/nbd.h 00:01:55.393 TEST_HEADER include/spdk/nvme.h 00:01:55.393 TEST_HEADER include/spdk/nvme_intel.h 00:01:55.393 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:55.393 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:55.393 TEST_HEADER include/spdk/nvme_spec.h 00:01:55.393 TEST_HEADER include/spdk/nvme_zns.h 00:01:55.393 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:55.393 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:55.393 TEST_HEADER include/spdk/nvmf.h 00:01:55.393 TEST_HEADER include/spdk/nvmf_spec.h 00:01:55.393 TEST_HEADER include/spdk/nvmf_transport.h 00:01:55.394 TEST_HEADER include/spdk/opal.h 00:01:55.394 TEST_HEADER include/spdk/opal_spec.h 00:01:55.394 TEST_HEADER include/spdk/pci_ids.h 00:01:55.394 TEST_HEADER include/spdk/pipe.h 00:01:55.394 TEST_HEADER include/spdk/queue.h 00:01:55.394 TEST_HEADER include/spdk/reduce.h 00:01:55.394 TEST_HEADER include/spdk/rpc.h 00:01:55.394 
TEST_HEADER include/spdk/scheduler.h 00:01:55.394 TEST_HEADER include/spdk/scsi.h 00:01:55.394 TEST_HEADER include/spdk/scsi_spec.h 00:01:55.394 TEST_HEADER include/spdk/sock.h 00:01:55.394 TEST_HEADER include/spdk/stdinc.h 00:01:55.394 TEST_HEADER include/spdk/string.h 00:01:55.394 TEST_HEADER include/spdk/thread.h 00:01:55.394 TEST_HEADER include/spdk/trace_parser.h 00:01:55.394 TEST_HEADER include/spdk/trace.h 00:01:55.394 TEST_HEADER include/spdk/tree.h 00:01:55.394 TEST_HEADER include/spdk/ublk.h 00:01:55.394 TEST_HEADER include/spdk/util.h 00:01:55.394 TEST_HEADER include/spdk/uuid.h 00:01:55.394 TEST_HEADER include/spdk/version.h 00:01:55.660 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:55.660 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:55.660 TEST_HEADER include/spdk/vhost.h 00:01:55.660 TEST_HEADER include/spdk/vmd.h 00:01:55.660 TEST_HEADER include/spdk/xor.h 00:01:55.660 TEST_HEADER include/spdk/zipf.h 00:01:55.660 CXX test/cpp_headers/accel.o 00:01:55.660 CXX test/cpp_headers/accel_module.o 00:01:55.660 CXX test/cpp_headers/assert.o 00:01:55.660 CXX test/cpp_headers/barrier.o 00:01:55.660 CXX test/cpp_headers/base64.o 00:01:55.660 CXX test/cpp_headers/bdev.o 00:01:55.660 CC app/spdk_tgt/spdk_tgt.o 00:01:55.660 CXX test/cpp_headers/bdev_module.o 00:01:55.660 CXX test/cpp_headers/bdev_zone.o 00:01:55.660 CXX test/cpp_headers/bit_array.o 00:01:55.660 CXX test/cpp_headers/bit_pool.o 00:01:55.660 CXX test/cpp_headers/blob_bdev.o 00:01:55.660 CXX test/cpp_headers/blobfs_bdev.o 00:01:55.660 CXX test/cpp_headers/blobfs.o 00:01:55.660 CXX test/cpp_headers/blob.o 00:01:55.660 CXX test/cpp_headers/conf.o 00:01:55.660 CXX test/cpp_headers/config.o 00:01:55.660 CXX test/cpp_headers/cpuset.o 00:01:55.660 CXX test/cpp_headers/crc16.o 00:01:55.660 CXX test/cpp_headers/crc32.o 00:01:55.660 CXX test/cpp_headers/crc64.o 00:01:55.660 CXX test/cpp_headers/dif.o 00:01:55.660 CXX test/cpp_headers/dma.o 00:01:55.660 CXX test/cpp_headers/endian.o 00:01:55.660 CXX test/cpp_headers/env_dpdk.o 00:01:55.660 CXX test/cpp_headers/env.o 00:01:55.660 CXX test/cpp_headers/event.o 00:01:55.660 CXX test/cpp_headers/fd.o 00:01:55.660 CXX test/cpp_headers/fd_group.o 00:01:55.660 CXX test/cpp_headers/file.o 00:01:55.660 CXX test/cpp_headers/ftl.o 00:01:55.660 CXX test/cpp_headers/gpt_spec.o 00:01:55.660 CXX test/cpp_headers/hexlify.o 00:01:55.660 CXX test/cpp_headers/histogram_data.o 00:01:55.660 CXX test/cpp_headers/idxd.o 00:01:55.660 CXX test/cpp_headers/idxd_spec.o 00:01:55.660 CXX test/cpp_headers/init.o 00:01:55.660 CXX test/cpp_headers/ioat.o 00:01:55.660 CXX test/cpp_headers/ioat_spec.o 00:01:55.660 CXX test/cpp_headers/iscsi_spec.o 00:01:55.660 CXX test/cpp_headers/json.o 00:01:55.660 CXX test/cpp_headers/keyring.o 00:01:55.660 CXX test/cpp_headers/jsonrpc.o 00:01:55.660 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:55.660 CC examples/util/zipf/zipf.o 00:01:55.660 CXX test/cpp_headers/keyring_module.o 00:01:55.660 CC test/thread/poller_perf/poller_perf.o 00:01:55.660 CC test/env/vtophys/vtophys.o 00:01:55.660 CC test/env/pci/pci_ut.o 00:01:55.660 CC app/fio/nvme/fio_plugin.o 00:01:55.660 CC test/app/stub/stub.o 00:01:55.660 CC test/env/memory/memory_ut.o 00:01:55.660 CC examples/ioat/verify/verify.o 00:01:55.660 CC examples/ioat/perf/perf.o 00:01:55.660 CC test/app/histogram_perf/histogram_perf.o 00:01:55.660 CC test/dma/test_dma/test_dma.o 00:01:55.660 CC app/fio/bdev/fio_plugin.o 00:01:55.660 CC test/app/jsoncat/jsoncat.o 00:01:55.660 LINK spdk_lspci 00:01:55.660 CC 
test/app/bdev_svc/bdev_svc.o 00:01:55.922 LINK rpc_client_test 00:01:55.922 CC test/env/mem_callbacks/mem_callbacks.o 00:01:55.922 LINK spdk_nvme_discover 00:01:55.922 LINK nvmf_tgt 00:01:55.922 LINK interrupt_tgt 00:01:55.922 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:55.922 LINK spdk_trace_record 00:01:55.922 LINK vtophys 00:01:55.922 LINK zipf 00:01:55.922 LINK poller_perf 00:01:55.922 LINK iscsi_tgt 00:01:55.922 LINK jsoncat 00:01:55.922 LINK histogram_perf 00:01:56.195 CXX test/cpp_headers/likely.o 00:01:56.195 CXX test/cpp_headers/log.o 00:01:56.195 CXX test/cpp_headers/lvol.o 00:01:56.195 CXX test/cpp_headers/memory.o 00:01:56.195 CXX test/cpp_headers/mmio.o 00:01:56.195 LINK env_dpdk_post_init 00:01:56.195 LINK stub 00:01:56.195 CXX test/cpp_headers/nbd.o 00:01:56.195 CXX test/cpp_headers/notify.o 00:01:56.195 CXX test/cpp_headers/nvme.o 00:01:56.195 CXX test/cpp_headers/nvme_intel.o 00:01:56.195 CXX test/cpp_headers/nvme_ocssd.o 00:01:56.195 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:56.195 CXX test/cpp_headers/nvme_spec.o 00:01:56.195 CXX test/cpp_headers/nvme_zns.o 00:01:56.195 CXX test/cpp_headers/nvmf_cmd.o 00:01:56.195 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:56.195 CXX test/cpp_headers/nvmf.o 00:01:56.195 CXX test/cpp_headers/nvmf_spec.o 00:01:56.195 CXX test/cpp_headers/nvmf_transport.o 00:01:56.195 LINK spdk_tgt 00:01:56.195 CXX test/cpp_headers/opal.o 00:01:56.195 LINK ioat_perf 00:01:56.195 CXX test/cpp_headers/opal_spec.o 00:01:56.195 CXX test/cpp_headers/pci_ids.o 00:01:56.195 CXX test/cpp_headers/pipe.o 00:01:56.195 CXX test/cpp_headers/queue.o 00:01:56.195 CXX test/cpp_headers/reduce.o 00:01:56.195 CXX test/cpp_headers/rpc.o 00:01:56.195 CXX test/cpp_headers/scheduler.o 00:01:56.195 CXX test/cpp_headers/scsi.o 00:01:56.195 CXX test/cpp_headers/scsi_spec.o 00:01:56.195 CXX test/cpp_headers/sock.o 00:01:56.195 CXX test/cpp_headers/stdinc.o 00:01:56.195 CXX test/cpp_headers/thread.o 00:01:56.195 CXX test/cpp_headers/string.o 00:01:56.195 CXX test/cpp_headers/trace.o 00:01:56.195 CXX test/cpp_headers/trace_parser.o 00:01:56.195 CXX test/cpp_headers/tree.o 00:01:56.195 CXX test/cpp_headers/ublk.o 00:01:56.195 CXX test/cpp_headers/uuid.o 00:01:56.195 CXX test/cpp_headers/util.o 00:01:56.195 CXX test/cpp_headers/version.o 00:01:56.195 CXX test/cpp_headers/vfio_user_pci.o 00:01:56.195 LINK bdev_svc 00:01:56.195 LINK verify 00:01:56.195 CXX test/cpp_headers/vfio_user_spec.o 00:01:56.195 CXX test/cpp_headers/vhost.o 00:01:56.195 CXX test/cpp_headers/vmd.o 00:01:56.195 LINK spdk_dd 00:01:56.195 CXX test/cpp_headers/xor.o 00:01:56.195 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:56.195 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:56.195 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:56.195 CXX test/cpp_headers/zipf.o 00:01:56.460 LINK spdk_trace 00:01:56.460 LINK pci_ut 00:01:56.460 LINK test_dma 00:01:56.460 LINK spdk_nvme 00:01:56.460 LINK nvme_fuzz 00:01:56.720 CC test/event/event_perf/event_perf.o 00:01:56.720 LINK spdk_bdev 00:01:56.720 CC test/event/reactor/reactor.o 00:01:56.720 CC test/event/reactor_perf/reactor_perf.o 00:01:56.720 CC examples/idxd/perf/perf.o 00:01:56.720 CC examples/vmd/lsvmd/lsvmd.o 00:01:56.720 CC examples/vmd/led/led.o 00:01:56.720 CC test/event/app_repeat/app_repeat.o 00:01:56.720 CC examples/sock/hello_world/hello_sock.o 00:01:56.720 CC examples/thread/thread/thread_ex.o 00:01:56.720 CC test/event/scheduler/scheduler.o 00:01:56.720 LINK spdk_nvme_perf 00:01:56.720 LINK spdk_top 00:01:56.720 LINK lsvmd 00:01:56.720 LINK reactor 
00:01:56.720 LINK reactor_perf 00:01:56.720 LINK led 00:01:56.720 LINK vhost_fuzz 00:01:56.720 LINK mem_callbacks 00:01:56.720 LINK event_perf 00:01:56.720 LINK app_repeat 00:01:56.720 LINK spdk_nvme_identify 00:01:56.720 CC app/vhost/vhost.o 00:01:56.977 LINK hello_sock 00:01:56.977 LINK scheduler 00:01:56.977 LINK idxd_perf 00:01:56.977 LINK thread 00:01:56.977 CC test/nvme/compliance/nvme_compliance.o 00:01:56.977 CC test/nvme/cuse/cuse.o 00:01:56.977 CC test/nvme/reset/reset.o 00:01:56.977 CC test/nvme/err_injection/err_injection.o 00:01:56.977 CC test/nvme/fdp/fdp.o 00:01:56.977 CC test/nvme/boot_partition/boot_partition.o 00:01:56.977 CC test/nvme/sgl/sgl.o 00:01:56.977 CC test/nvme/fused_ordering/fused_ordering.o 00:01:56.977 CC test/nvme/simple_copy/simple_copy.o 00:01:56.977 CC test/nvme/e2edp/nvme_dp.o 00:01:56.977 CC test/nvme/reserve/reserve.o 00:01:56.977 CC test/nvme/startup/startup.o 00:01:56.977 CC test/nvme/overhead/overhead.o 00:01:56.977 CC test/nvme/connect_stress/connect_stress.o 00:01:56.977 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:56.977 LINK vhost 00:01:56.977 CC test/nvme/aer/aer.o 00:01:56.977 CC test/accel/dif/dif.o 00:01:56.977 CC test/blobfs/mkfs/mkfs.o 00:01:57.234 CC test/lvol/esnap/esnap.o 00:01:57.234 LINK memory_ut 00:01:57.234 LINK boot_partition 00:01:57.234 LINK doorbell_aers 00:01:57.234 LINK connect_stress 00:01:57.234 LINK startup 00:01:57.234 LINK fused_ordering 00:01:57.234 LINK err_injection 00:01:57.234 LINK reserve 00:01:57.234 LINK simple_copy 00:01:57.234 LINK mkfs 00:01:57.234 LINK sgl 00:01:57.234 LINK reset 00:01:57.234 LINK nvme_dp 00:01:57.234 CC examples/nvme/hotplug/hotplug.o 00:01:57.234 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:57.234 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:57.234 CC examples/nvme/abort/abort.o 00:01:57.234 CC examples/nvme/reconnect/reconnect.o 00:01:57.234 LINK overhead 00:01:57.234 CC examples/nvme/hello_world/hello_world.o 00:01:57.234 CC examples/nvme/arbitration/arbitration.o 00:01:57.234 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:57.234 LINK aer 00:01:57.234 LINK nvme_compliance 00:01:57.234 LINK fdp 00:01:57.492 CC examples/blob/cli/blobcli.o 00:01:57.492 CC examples/accel/perf/accel_perf.o 00:01:57.492 CC examples/blob/hello_world/hello_blob.o 00:01:57.492 LINK dif 00:01:57.492 LINK pmr_persistence 00:01:57.492 LINK cmb_copy 00:01:57.492 LINK hello_world 00:01:57.492 LINK hotplug 00:01:57.492 LINK arbitration 00:01:57.492 LINK reconnect 00:01:57.750 LINK abort 00:01:57.750 LINK hello_blob 00:01:57.750 LINK nvme_manage 00:01:57.750 LINK accel_perf 00:01:57.750 LINK blobcli 00:01:58.008 LINK iscsi_fuzz 00:01:58.008 CC test/bdev/bdevio/bdevio.o 00:01:58.008 LINK cuse 00:01:58.267 CC examples/bdev/hello_world/hello_bdev.o 00:01:58.267 CC examples/bdev/bdevperf/bdevperf.o 00:01:58.267 LINK bdevio 00:01:58.526 LINK hello_bdev 00:01:58.818 LINK bdevperf 00:01:59.752 CC examples/nvmf/nvmf/nvmf.o 00:01:59.752 LINK nvmf 00:02:00.740 LINK esnap 00:02:00.998 00:02:00.998 real 0m51.809s 00:02:00.998 user 6m48.095s 00:02:00.998 sys 3m24.281s 00:02:00.998 13:32:27 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:00.998 13:32:27 make -- common/autotest_common.sh@10 -- $ set +x 00:02:00.998 ************************************ 00:02:00.998 END TEST make 00:02:00.998 ************************************ 00:02:00.998 13:32:27 -- common/autotest_common.sh@1142 -- $ return 0 00:02:00.998 13:32:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:00.998 13:32:27 -- 
pm/common@29 -- $ signal_monitor_resources TERM 00:02:00.998 13:32:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:00.998 13:32:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.998 13:32:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:00.998 13:32:27 -- pm/common@44 -- $ pid=2240635 00:02:00.998 13:32:27 -- pm/common@50 -- $ kill -TERM 2240635 00:02:00.998 13:32:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.998 13:32:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:00.998 13:32:27 -- pm/common@44 -- $ pid=2240637 00:02:00.998 13:32:27 -- pm/common@50 -- $ kill -TERM 2240637 00:02:00.998 13:32:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.998 13:32:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:00.998 13:32:27 -- pm/common@44 -- $ pid=2240639 00:02:00.998 13:32:27 -- pm/common@50 -- $ kill -TERM 2240639 00:02:00.998 13:32:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.998 13:32:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:00.998 13:32:27 -- pm/common@44 -- $ pid=2240663 00:02:00.998 13:32:27 -- pm/common@50 -- $ sudo -E kill -TERM 2240663 00:02:01.257 13:32:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:01.257 13:32:27 -- nvmf/common.sh@7 -- # uname -s 00:02:01.257 13:32:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:01.257 13:32:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:01.257 13:32:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:01.257 13:32:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:01.257 13:32:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:01.257 13:32:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:01.257 13:32:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:01.257 13:32:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:01.257 13:32:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:01.257 13:32:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:01.257 13:32:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:02:01.257 13:32:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:02:01.257 13:32:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:01.257 13:32:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:01.257 13:32:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:01.257 13:32:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:01.257 13:32:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:01.257 13:32:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:01.257 13:32:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:01.257 13:32:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:01.257 13:32:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.257 13:32:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.257 13:32:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.257 13:32:27 -- paths/export.sh@5 -- # export PATH 00:02:01.257 13:32:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.257 13:32:27 -- nvmf/common.sh@47 -- # : 0 00:02:01.257 13:32:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:01.257 13:32:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:01.257 13:32:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:01.257 13:32:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:01.257 13:32:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:01.257 13:32:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:01.257 13:32:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:01.257 13:32:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:01.257 13:32:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:01.257 13:32:27 -- spdk/autotest.sh@32 -- # uname -s 00:02:01.257 13:32:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:01.257 13:32:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:01.257 13:32:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:01.257 13:32:27 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:01.257 13:32:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:01.257 13:32:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:01.257 13:32:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:01.257 13:32:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:01.257 13:32:27 -- spdk/autotest.sh@48 -- # udevadm_pid=2298159 00:02:01.257 13:32:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:01.257 13:32:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:01.257 13:32:27 -- pm/common@17 -- # local monitor 00:02:01.257 13:32:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.257 13:32:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.257 13:32:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.257 13:32:27 -- pm/common@21 -- # date +%s 00:02:01.257 13:32:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.257 13:32:27 -- pm/common@21 -- # date +%s 00:02:01.257 13:32:27 -- 
pm/common@25 -- # sleep 1 00:02:01.257 13:32:27 -- pm/common@21 -- # date +%s 00:02:01.257 13:32:27 -- pm/common@21 -- # date +%s 00:02:01.257 13:32:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043147 00:02:01.257 13:32:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043147 00:02:01.257 13:32:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043147 00:02:01.257 13:32:27 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043147 00:02:01.257 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043147_collect-vmstat.pm.log 00:02:01.257 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043147_collect-cpu-load.pm.log 00:02:01.257 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043147_collect-cpu-temp.pm.log 00:02:01.257 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043147_collect-bmc-pm.bmc.pm.log 00:02:02.194 13:32:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:02.194 13:32:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:02.194 13:32:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:02.194 13:32:28 -- common/autotest_common.sh@10 -- # set +x 00:02:02.194 13:32:28 -- spdk/autotest.sh@59 -- # create_test_list 00:02:02.194 13:32:28 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:02.194 13:32:28 -- common/autotest_common.sh@10 -- # set +x 00:02:02.452 13:32:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:02.452 13:32:28 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:02.452 13:32:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:02.453 13:32:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:02.453 13:32:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:02.453 13:32:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:02.453 13:32:28 -- common/autotest_common.sh@1455 -- # uname 00:02:02.453 13:32:28 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:02.453 13:32:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:02.453 13:32:28 -- common/autotest_common.sh@1475 -- # uname 00:02:02.453 13:32:28 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:02.453 13:32:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:02.453 13:32:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:02.453 13:32:28 -- spdk/autotest.sh@72 -- # hash lcov 00:02:02.453 13:32:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:02.453 13:32:28 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:02.453 --rc lcov_branch_coverage=1 00:02:02.453 --rc 
lcov_function_coverage=1 00:02:02.453 --rc genhtml_branch_coverage=1 00:02:02.453 --rc genhtml_function_coverage=1 00:02:02.453 --rc genhtml_legend=1 00:02:02.453 --rc geninfo_all_blocks=1 00:02:02.453 ' 00:02:02.453 13:32:28 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:02.453 --rc lcov_branch_coverage=1 00:02:02.453 --rc lcov_function_coverage=1 00:02:02.453 --rc genhtml_branch_coverage=1 00:02:02.453 --rc genhtml_function_coverage=1 00:02:02.453 --rc genhtml_legend=1 00:02:02.453 --rc geninfo_all_blocks=1 00:02:02.453 ' 00:02:02.453 13:32:28 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:02.453 --rc lcov_branch_coverage=1 00:02:02.453 --rc lcov_function_coverage=1 00:02:02.453 --rc genhtml_branch_coverage=1 00:02:02.453 --rc genhtml_function_coverage=1 00:02:02.453 --rc genhtml_legend=1 00:02:02.453 --rc geninfo_all_blocks=1 00:02:02.453 --no-external' 00:02:02.453 13:32:28 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:02.453 --rc lcov_branch_coverage=1 00:02:02.453 --rc lcov_function_coverage=1 00:02:02.453 --rc genhtml_branch_coverage=1 00:02:02.453 --rc genhtml_function_coverage=1 00:02:02.453 --rc genhtml_legend=1 00:02:02.453 --rc geninfo_all_blocks=1 00:02:02.453 --no-external' 00:02:02.453 13:32:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:02.453 lcov: LCOV version 1.14 00:02:02.453 13:32:28 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:14.657 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:14.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:24.633 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:24.633 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions 
found 00:02:24.633 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:24.634 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:24.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:24.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:27.166 13:32:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:27.166 13:32:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:27.166 13:32:53 -- common/autotest_common.sh@10 -- # set +x 00:02:27.166 13:32:53 -- spdk/autotest.sh@91 -- # rm -f 00:02:27.166 13:32:53 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.458 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:02:30.458 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:00:04.4 (8086 2021): Already using the ioatdma driver 
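Stepping back to the coverage pass logged above: the baseline capture can be reproduced with a short shell sketch like the one below. It assumes a gcc/gcov-instrumented SPDK tree and reuses the LCOV_OPTS values exported earlier; the directory variables are placeholders, and the "no functions found" warnings are the expected output for .gcno stubs that contain no executable code.

# Sketch: capture an initial (zero-count) coverage baseline, assuming a
# gcc/gcov-instrumented tree under $SRC and a results directory $OUT
# (both placeholders).
SRC=/path/to/spdk
OUT=/path/to/output

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
  --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
  --rc genhtml_legend=1 --rc geninfo_all_blocks=1"

# -c -i captures initial data with all counters at zero, -t names the
# tracefile, -d points at the tree holding the .gcno files.
lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"
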
00:02:30.458 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:30.458 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:30.718 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:30.718 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:30.718 13:32:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:30.718 13:32:57 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:30.718 13:32:57 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:30.718 13:32:57 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:30.718 13:32:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:30.718 13:32:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:30.718 13:32:57 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:30.718 13:32:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:30.718 13:32:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:30.718 13:32:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:30.718 13:32:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:30.718 13:32:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:30.718 13:32:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:30.718 13:32:57 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:30.718 13:32:57 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:30.718 No valid GPT data, bailing 00:02:30.718 13:32:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:30.718 13:32:57 -- scripts/common.sh@391 -- # pt= 00:02:30.718 13:32:57 -- scripts/common.sh@392 -- # return 1 00:02:30.718 13:32:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:30.718 1+0 records in 00:02:30.718 1+0 records out 00:02:30.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00232245 s, 451 MB/s 00:02:30.718 13:32:57 -- spdk/autotest.sh@118 -- # sync 00:02:30.718 13:32:57 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:30.718 13:32:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:30.718 13:32:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:35.999 13:33:02 -- spdk/autotest.sh@124 -- # uname -s 00:02:35.999 13:33:02 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:35.999 13:33:02 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:35.999 13:33:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:35.999 13:33:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:35.999 13:33:02 -- common/autotest_common.sh@10 -- # set +x 00:02:35.999 ************************************ 00:02:35.999 START TEST setup.sh 00:02:35.999 ************************************ 00:02:35.999 13:33:02 setup.sh 
-- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:36.258 * Looking for test storage... 00:02:36.258 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:36.258 13:33:02 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:36.258 13:33:02 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:36.258 13:33:02 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:36.258 13:33:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:36.258 13:33:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:36.258 13:33:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:36.258 ************************************ 00:02:36.258 START TEST acl 00:02:36.258 ************************************ 00:02:36.258 13:33:02 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:36.258 * Looking for test storage... 00:02:36.258 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:36.258 13:33:02 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:36.258 13:33:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:36.258 13:33:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:36.258 13:33:02 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:36.258 13:33:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:36.258 13:33:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:36.258 13:33:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:36.258 13:33:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:36.258 13:33:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:36.258 13:33:02 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:36.258 13:33:02 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:36.258 13:33:02 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:36.258 13:33:02 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:36.258 13:33:02 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:36.258 13:33:02 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:36.258 13:33:02 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.451 13:33:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:40.451 13:33:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:40.451 13:33:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:40.451 13:33:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:40.451 13:33:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.452 13:33:06 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:43.838 Hugepages 00:02:43.838 node hugesize free / total 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:43.838 
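The device collection traced just below reduces to scanning `setup.sh status` output and keeping only controllers bound to the nvme driver. A simplified sketch of that loop follows; the variable names are illustrative and the PCI_BLOCKED filtering the real helper also applies is omitted.

# Simplified sketch of the collection loop traced below: read the status
# table, keep BDF-looking entries whose driver column is nvme, and skip the
# ioatdma channels and the hugepage/node header lines.
declare -a devs=()
declare -A drivers=()

while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue    # not a PCI BDF (e.g. hugepage rows)
        [[ $driver == nvme ]] || continue    # ioatdma and other drivers are skipped
        devs+=("$dev")
        drivers["$dev"]=nvme
done < <(./scripts/setup.sh status)          # script path shortened here
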
13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.838 00:02:43.838 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.838 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == 
*\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:43.839 13:33:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:43.839 13:33:09 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:43.839 13:33:09 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:43.839 13:33:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:43.839 ************************************ 00:02:43.839 START TEST denied 00:02:43.839 ************************************ 00:02:43.839 13:33:09 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:43.839 13:33:09 setup.sh.acl.denied -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5f:00.0' 00:02:43.839 13:33:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:43.839 13:33:09 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0' 00:02:43.839 13:33:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.839 13:33:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:47.129 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0 00:02:47.129 13:33:13 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5f:00.0 00:02:47.129 13:33:13 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:47.129 13:33:13 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:47.129 13:33:13 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:02:47.129 13:33:13 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:02:47.129 13:33:13 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:47.129 13:33:13 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:47.129 13:33:13 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:47.129 13:33:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:47.129 13:33:13 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.399 00:02:52.399 real 0m8.239s 00:02:52.399 user 0m2.629s 00:02:52.399 sys 0m4.881s 00:02:52.399 13:33:18 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:52.399 13:33:18 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:52.399 ************************************ 00:02:52.399 END TEST denied 00:02:52.399 ************************************ 00:02:52.399 13:33:18 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:52.399 13:33:18 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:52.399 13:33:18 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.399 13:33:18 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.399 13:33:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:52.399 ************************************ 00:02:52.399 START TEST allowed 00:02:52.399 ************************************ 00:02:52.399 13:33:18 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:52.399 13:33:18 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0 00:02:52.399 13:33:18 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:52.399 13:33:18 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*' 00:02:52.399 13:33:18 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.399 13:33:18 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:00.524 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:00.524 13:33:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:00.524 13:33:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:00.524 13:33:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:00.524 13:33:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.524 13:33:26 setup.sh.acl.allowed -- setup/common.sh@12 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.720 00:03:04.720 real 0m12.179s 00:03:04.720 user 0m2.604s 00:03:04.720 sys 0m4.754s 00:03:04.720 13:33:30 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:04.720 13:33:30 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:04.720 ************************************ 00:03:04.720 END TEST allowed 00:03:04.720 ************************************ 00:03:04.720 13:33:30 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:04.720 00:03:04.720 real 0m27.890s 00:03:04.720 user 0m7.980s 00:03:04.720 sys 0m14.645s 00:03:04.720 13:33:30 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:04.720 13:33:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:04.720 ************************************ 00:03:04.720 END TEST acl 00:03:04.720 ************************************ 00:03:04.720 13:33:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:04.720 13:33:30 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:04.720 13:33:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.720 13:33:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.720 13:33:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:04.720 ************************************ 00:03:04.720 START TEST hugepages 00:03:04.720 ************************************ 00:03:04.720 13:33:30 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:04.720 * Looking for test storage... 00:03:04.720 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 72137072 kB' 'MemAvailable: 75486436 kB' 'Buffers: 2704 kB' 
'Cached: 13365628 kB' 'SwapCached: 0 kB' 'Active: 10439192 kB' 'Inactive: 3465392 kB' 'Active(anon): 10001752 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539736 kB' 'Mapped: 197032 kB' 'Shmem: 9465500 kB' 'KReclaimable: 200352 kB' 'Slab: 611476 kB' 'SReclaimable: 200352 kB' 'SUnreclaim: 411124 kB' 'KernelStack: 16912 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438216 kB' 'Committed_AS: 11417376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205812 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.720 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- 
# read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var 
val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:04.721 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.722 
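Note: the trace above is setup/common.sh walking /proc/meminfo key by key with IFS=': ' until Hugepagesize matches, then echoing 2048; hugepages.sh takes that as default_hugepages and enumerates the two NUMA nodes. A minimal standalone sketch of that lookup (the function name lookup_meminfo is illustrative, not the real SPDK helper):

    # Minimal sketch: look up one key in /proc/meminfo the same way the traced
    # loop does -- split each record on ': ', compare the key field, and print
    # the value field on the first match.
    lookup_meminfo() {
            local get=$1 var val _
            while IFS=': ' read -r var val _; do
                    [[ $var == "$get" ]] && echo "$val" && return 0
            done < /proc/meminfo
            return 1
    }
    # e.g. lookup_meminfo Hugepagesize   -> 2048 on this host (2 MiB default pages)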
13:33:30 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:04.722 13:33:30 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:04.722 13:33:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.722 13:33:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.722 13:33:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:04.722 ************************************ 00:03:04.722 START TEST default_setup 00:03:04.722 ************************************ 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.722 13:33:30 
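Note: clear_hp above writes 0 into every hugepages-*/nr_hugepages counter under each NUMA node and exports CLEAR_HUGE=yes, then get_test_nr_hugepages turns the requested 2097152 kB into 1024 default-size pages pinned to node 0. A rough sketch of those two steps, assuming 2048 kB default pages (clear_all_hugepages is an illustrative name, not the script's own):

    # Sketch of the clear step: release every reserved huge page on every node
    # before the test runs (needs root to write the sysfs counters).
    clear_all_hugepages() {
            local node hp
            for node in /sys/devices/system/node/node[0-9]*; do
                    for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
                            echo 0 > "$hp"
                    done
            done
    }
    # Sketch of the sizing step: requested size / default page size = page count.
    nr_hugepages=$((2097152 / 2048))      # -> 1024, assigned to node 0 only here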
setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.722 13:33:30 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:08.011 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:08.011 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:08.011 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:08.012 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:13.292 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:13.292 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:13.292 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:13.292 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.293 13:33:39 
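Note: after scripts/setup.sh rebinds the ioatdma channels (8086 2021) and the NVMe device at 0000:5f:00.0 to vfio-pci, verify_nr_hugepages re-reads meminfo; with node= left empty the per-node sysfs file /sys/devices/system/node/node/meminfo does not exist, so the global /proc/meminfo is used. A sketch of that source selection as traced in setup/common.sh (read_meminfo_file is an illustrative name):

    shopt -s extglob                      # needed for the +([0-9]) prefix strip below
    read_meminfo_file() {
            # Mirrors the node= handling above: fall back to /proc/meminfo when
            # no (or an empty) node id is given.
            local node=$1 mem_f=/proc/meminfo mem
            [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
                    mem_f=/sys/devices/system/node/node$node/meminfo
            mapfile -t mem < "$mem_f"                  # one meminfo record per array entry
            mem=("${mem[@]#Node +([0-9]) }")           # drop the "Node N " prefix of per-node files
            printf '%s\n' "${mem[@]}"
    }
    # read_meminfo_file        -> whole-system /proc/meminfo (the empty node= case above)
    # read_meminfo_file 0      -> /sys/devices/system/node/node0/meminfo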
setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74271084 kB' 'MemAvailable: 77620340 kB' 'Buffers: 2704 kB' 'Cached: 13365760 kB' 'SwapCached: 0 kB' 'Active: 10448308 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010868 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548624 kB' 'Mapped: 196224 kB' 'Shmem: 9465632 kB' 'KReclaimable: 200136 kB' 'Slab: 609664 kB' 'SReclaimable: 200136 kB' 'SUnreclaim: 409528 kB' 'KernelStack: 17104 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11426920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205952 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.293 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74274464 kB' 'MemAvailable: 77623720 kB' 'Buffers: 2704 kB' 'Cached: 13365764 kB' 'SwapCached: 0 kB' 'Active: 10448628 kB' 'Inactive: 3465392 kB' 'Active(anon): 10011188 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548908 kB' 'Mapped: 196144 kB' 'Shmem: 9465636 kB' 'KReclaimable: 200136 kB' 'Slab: 609548 kB' 'SReclaimable: 200136 kB' 'SUnreclaim: 409412 kB' 'KernelStack: 17104 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11425456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205904 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.294 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@33 -- # return 0 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74276492 kB' 'MemAvailable: 77625748 kB' 'Buffers: 2704 kB' 'Cached: 13365784 kB' 'SwapCached: 0 kB' 'Active: 10449504 kB' 'Inactive: 3465392 kB' 'Active(anon): 10012064 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549712 kB' 'Mapped: 196648 kB' 'Shmem: 9465656 kB' 'KReclaimable: 200136 kB' 'Slab: 609548 kB' 'SReclaimable: 200136 kB' 'SUnreclaim: 409412 kB' 'KernelStack: 16944 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11428712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205872 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
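Note: at this point anon=0 and surp=0 have been read back, and HugePages_Rsvd is being fetched from the same snapshot (HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB). The pass/fail arithmetic itself sits further down hugepages.sh and is not part of this trace; a minimal check merely consistent with the counters printed above would be:

    # Hedged sketch only -- not the script's own verification logic.  Uses the
    # lookup_meminfo helper sketched earlier to pull the four hugepage counters
    # and confirm the expected 1024 free, unreserved default-size pages.
    total=$(lookup_meminfo HugePages_Total)
    free=$(lookup_meminfo HugePages_Free)
    rsvd=$(lookup_meminfo HugePages_Rsvd)
    surp=$(lookup_meminfo HugePages_Surp)
    (( total == 1024 && free == 1024 && rsvd == 0 && surp == 0 )) ||
            { echo "unexpected hugepage state" >&2; exit 1; }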
00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.295 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:13.296 nr_hugepages=1024 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.296 resv_hugepages=0 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.296 surplus_hugepages=0 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.296 anon_hugepages=0 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.296 
13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.296 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74272668 kB' 'MemAvailable: 77621924 kB' 'Buffers: 2704 kB' 'Cached: 13365804 kB' 'SwapCached: 0 kB' 'Active: 10454256 kB' 'Inactive: 3465392 kB' 'Active(anon): 10016816 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554408 kB' 'Mapped: 196648 kB' 'Shmem: 9465676 kB' 'KReclaimable: 200136 kB' 'Slab: 609548 kB' 'SReclaimable: 200136 kB' 'SUnreclaim: 409412 kB' 'KernelStack: 17120 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11433100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206036 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
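Earlier in this trace (hugepages.sh@99 through @110) the test reduced those meminfo scans to three numbers and checked them against the configured page count: surplus and reserved pages must both be zero and HugePages_Total must equal the nr_hugepages value echoed as nr_hugepages=1024. A condensed, hypothetical equivalent of that accounting check (the hp helper is illustrative, not an SPDK function):

hp() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

nr_hugepages=1024                      # the count this test run expects
surp=$(hp HugePages_Surp)
resv=$(hp HugePages_Rsvd)
total=$(hp HugePages_Total)
if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
else
    echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
fi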
00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.297 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 
13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
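The get_nodes loop just above, and the node=0 lookup that starts here, walk the NUMA nodes under sysfs and record how many hugepages landed on each one (the trace reports no_nodes=2, with all 1024 pages on node 0 and none on node 1). A rough standalone sketch of that per-node walk; the array name and the 2048kB sysfs path are illustrative assumptions, not the SPDK helpers:

declare -A nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}                                    # "node0" -> "0"
    nodes_sys[$id]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"
for id in "${!nodes_sys[@]}"; do
    echo "node${id}=${nodes_sys[$id]}"                   # e.g. node0=1024, node1=0
done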
00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116980 kB' 'MemFree: 42799036 kB' 'MemUsed: 5317944 kB' 'SwapCached: 0 kB' 'Active: 1737224 kB' 'Inactive: 216068 kB' 'Active(anon): 1475816 kB' 'Inactive(anon): 0 kB' 'Active(file): 261408 kB' 'Inactive(file): 216068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1788904 kB' 'Mapped: 70416 kB' 'AnonPages: 167724 kB' 'Shmem: 1311428 kB' 'KernelStack: 8056 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111860 kB' 'Slab: 350192 kB' 'SReclaimable: 111860 kB' 'SUnreclaim: 238332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.298 13:33:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.298 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:13.299 node0=1024 expecting 1024 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:13.299 00:03:13.299 real 0m8.577s 00:03:13.299 user 0m1.468s 00:03:13.299 sys 0m2.356s 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.299 13:33:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:13.299 ************************************ 00:03:13.299 END TEST default_setup 00:03:13.299 ************************************ 00:03:13.299 13:33:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:13.299 13:33:39 setup.sh.hugepages -- 
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:13.299 13:33:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.299 13:33:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.299 13:33:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.299 ************************************ 00:03:13.299 START TEST per_node_1G_alloc 00:03:13.299 ************************************ 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.299 13:33:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:16.592 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:16.592 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.592 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
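For readers following the trace: the per_node_1G_alloc setup above converts the requested 1048576 kB into 512 hugepages for each node listed in HUGENODE=0,1 (1048576 kB divided by the 2048 kB default hugepage size), and the get_meminfo calls traced below walk /proc/meminfo with IFS=': ' read -r var val _ until the requested key matches. The following is a minimal stand-alone sketch of that pattern, assuming only the system-wide /proc/meminfo layout (the traced helper also handles per-node /sys/devices/system/node/node<N>/meminfo files by stripping the "Node <N> " prefix, which this sketch omits); the function names are illustrative, not the actual setup/common.sh API.

#!/usr/bin/env bash
# Illustrative sketch only: a simplified, stand-alone re-implementation of the two
# patterns exercised in the xtrace around it. It is NOT the actual SPDK
# setup/common.sh or setup/hugepages.sh code; the function names are made up here.

# Look up one /proc/meminfo field the way the traced get_meminfo loop does:
# split every line on ': ', stop at the first matching key, echo its value.
meminfo_value() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    echo 0   # field not present
}

# Derive the per-node hugepage count the way the per_node_1G_alloc trace does:
# the requested size in kB divided by the default hugepage size gives the page
# count, and every node passed in (HUGENODE) gets that count.
per_node_hugepages() {
    local size_kb=$1; shift                   # e.g. 1048576
    local default_kb nr node
    default_kb=$(meminfo_value Hugepagesize)  # 2048 kB on the node in this log
    (( default_kb > 0 )) || default_kb=2048   # fall back to the common 2 MB default
    nr=$(( size_kb / default_kb ))            # 1048576 / 2048 = 512
    for node in "$@"; do                      # e.g. nodes 0 1 (HUGENODE=0,1)
        echo "node${node}=${nr}"
    done
}

# Usage matching the values visible in the log:
meminfo_value HugePages_Total    # prints 1024 on this test node
per_node_hugepages 1048576 0 1   # prints node0=512 and node1=512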
00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74273420 kB' 'MemAvailable: 77622656 kB' 'Buffers: 2704 kB' 'Cached: 13365896 kB' 'SwapCached: 0 kB' 'Active: 10447728 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010288 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547248 kB' 'Mapped: 195256 kB' 'Shmem: 9465768 kB' 'KReclaimable: 200096 kB' 'Slab: 609712 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409616 kB' 'KernelStack: 16880 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11415776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206032 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.592 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74273068 kB' 'MemAvailable: 77622304 kB' 'Buffers: 2704 kB' 'Cached: 13365900 kB' 'SwapCached: 0 kB' 'Active: 10447016 kB' 'Inactive: 3465392 kB' 'Active(anon): 10009576 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547052 kB' 'Mapped: 195144 kB' 'Shmem: 9465772 kB' 'KReclaimable: 200096 kB' 'Slab: 609688 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409592 kB' 'KernelStack: 16896 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11415792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206000 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.593 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.594 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74273068 kB' 'MemAvailable: 77622304 kB' 'Buffers: 2704 kB' 'Cached: 13365900 kB' 'SwapCached: 0 kB' 'Active: 10447016 kB' 'Inactive: 3465392 kB' 'Active(anon): 10009576 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547052 kB' 'Mapped: 195144 kB' 'Shmem: 9465772 kB' 'KReclaimable: 200096 kB' 'Slab: 609688 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409592 kB' 'KernelStack: 16896 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11415816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206000 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.595 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.596 nr_hugepages=1024 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.596 resv_hugepages=0 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.596 surplus_hugepages=0 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.596 anon_hugepages=0 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.596 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74273348 kB' 'MemAvailable: 77622584 kB' 'Buffers: 2704 kB' 'Cached: 13365960 kB' 'SwapCached: 0 kB' 'Active: 10446788 kB' 'Inactive: 3465392 kB' 'Active(anon): 10009348 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546728 kB' 'Mapped: 195144 kB' 'Shmem: 9465832 kB' 'KReclaimable: 200096 kB' 'Slab: 609688 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409592 kB' 'KernelStack: 16896 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11415840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206000 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 
13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.597 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116980 kB' 'MemFree: 43854132 kB' 'MemUsed: 4262848 kB' 'SwapCached: 0 kB' 'Active: 1736056 kB' 'Inactive: 216068 kB' 'Active(anon): 1474648 kB' 
'Inactive(anon): 0 kB' 'Active(file): 261408 kB' 'Inactive(file): 216068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1789040 kB' 'Mapped: 69920 kB' 'AnonPages: 166220 kB' 'Shmem: 1311564 kB' 'KernelStack: 7960 kB' 'PageTables: 3356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111836 kB' 'Slab: 350348 kB' 'SReclaimable: 111836 kB' 'SUnreclaim: 238512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.598 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 
13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 
13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.599 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@28 -- # mapfile -t mem 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176548 kB' 'MemFree: 30420468 kB' 'MemUsed: 13756080 kB' 'SwapCached: 0 kB' 'Active: 8710800 kB' 'Inactive: 3249324 kB' 'Active(anon): 8534768 kB' 'Inactive(anon): 0 kB' 'Active(file): 176032 kB' 'Inactive(file): 3249324 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11579648 kB' 'Mapped: 125224 kB' 'AnonPages: 380696 kB' 'Shmem: 8154292 kB' 'KernelStack: 8936 kB' 'PageTables: 4768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88260 kB' 'Slab: 259340 kB' 'SReclaimable: 88260 kB' 'SUnreclaim: 171080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.600 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:42 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:16.601 node0=512 expecting 512 00:03:16.601 13:33:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:16.601 node1=512 expecting 512 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:16.601 00:03:16.601 real 0m3.540s 00:03:16.601 user 0m1.399s 00:03:16.601 sys 0m2.206s 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.601 13:33:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:16.601 ************************************ 00:03:16.601 END TEST per_node_1G_alloc 00:03:16.601 ************************************ 00:03:16.601 13:33:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:16.601 13:33:43 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:16.601 13:33:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.601 13:33:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.601 13:33:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.601 ************************************ 00:03:16.601 START TEST even_2G_alloc 00:03:16.601 ************************************ 00:03:16.601 13:33:43 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:16.601 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:16.601 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:16.601 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 
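The trace above closes out per_node_1G_alloc (both NUMA nodes report the expected 512 huge pages, so the final [[ 512 == \5\1\2 ]] check passes) and starts even_2G_alloc, where hugepages.sh requests 2097152 kB against the default 2048 kB page size, arrives at nr_hugepages=1024, and spreads those pages evenly over the two nodes, 512 apiece. A minimal sketch of that even split, assuming a fixed node count and page total rather than SPDK's real get_test_nr_hugepages_per_node helper:

    #!/usr/bin/env bash
    # Sketch only: reproduces the even per-node split visible in the
    # even_2G_alloc trace (1024 x 2 MB pages over 2 NUMA nodes = 512 each).
    # nr_hugepages and no_nodes are assumed inputs, not SPDK variables.
    nr_hugepages=1024
    no_nodes=2
    declare -a nodes_test

    per_node=$(( nr_hugepages / no_nodes ))
    node=$(( no_nodes - 1 ))
    while (( node >= 0 )); do
        nodes_test[node]=$per_node
        node=$(( node - 1 ))
    done

    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
    done

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes set, the test then runs scripts/setup.sh (the "Already using the vfio-pci driver" lines that follow), and the per-node HugePages_Total / HugePages_Free values read back afterwards are checked against these expected 512/512 counts.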
00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.602 13:33:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:19.941 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.941 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.941 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:19.941 13:33:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.941 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74270768 kB' 'MemAvailable: 77620004 kB' 'Buffers: 2704 kB' 'Cached: 13366064 kB' 'SwapCached: 0 kB' 'Active: 10448772 kB' 'Inactive: 3465392 kB' 'Active(anon): 10011332 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548624 kB' 'Mapped: 195284 kB' 'Shmem: 9465936 kB' 'KReclaimable: 200096 kB' 'Slab: 609640 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409544 kB' 'KernelStack: 17088 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11416452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206096 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 
13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.942 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:20.247 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74274948 kB' 'MemAvailable: 77624184 kB' 'Buffers: 2704 kB' 'Cached: 13366064 kB' 'SwapCached: 0 kB' 'Active: 10447656 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010216 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547564 kB' 'Mapped: 195156 kB' 'Shmem: 9465936 kB' 'KReclaimable: 200096 kB' 'Slab: 609492 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409396 kB' 'KernelStack: 16896 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11416452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206048 kB' 'VmallocChunk: 0 kB' 'Percpu: 
56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.248 13:33:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace trimmed: setup/common.sh@31-32 checks each remaining /proc/meminfo key (Zswap through HugePages_Rsvd) against HugePages_Surp; every non-matching key logs the same "continue", "IFS=': '", "read -r var val _" lines]
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.249 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74275380 kB' 'MemAvailable: 77624616 kB' 'Buffers: 2704 kB' 'Cached: 13366084 kB' 'SwapCached: 0 kB' 'Active: 10447676 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010236 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547556 kB' 'Mapped: 195156 kB' 'Shmem: 9465956 kB' 'KReclaimable: 200096 kB' 'Slab: 609492 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409396 kB' 'KernelStack: 16880 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11416476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206032 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB'
[xtrace trimmed: setup/common.sh@31-32 walks the same /proc/meminfo keys (MemTotal through HugePages_Free), this time against HugePages_Rsvd; every non-matching key logs the same "continue", "IFS=': '", "read -r var val _" lines]
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:20.251 nr_hugepages=1024
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:20.251 resv_hugepages=0
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:20.251 surplus_hugepages=0
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:20.251 anon_hugepages=0
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.251 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74275760 kB' 'MemAvailable: 77624996 kB' 'Buffers: 2704 kB' 'Cached: 13366104 kB' 'SwapCached: 0 kB' 'Active: 10447760 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010320 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547708 kB' 'Mapped: 195156 kB' 'Shmem: 9465976 kB' 'KReclaimable: 200096 kB' 'Slab: 609492 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409396 kB' 'KernelStack: 16912 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11416496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205984 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB'
[xtrace trimmed: setup/common.sh@31-32 walks the /proc/meminfo keys (MemTotal through Unaccepted) against HugePages_Total; every non-matching key logs the same "continue", "IFS=': '", "read -r var val _" lines]
00:03:20.252 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:20.252 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:20.252 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.252 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
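At this point setup/hugepages.sh has everything it needs for the even-2G bookkeeping. Roughly, using the values from this run (the helper calls are the ones traced above; the script lines themselves are not reproduced here):

# Sketch of the accounting visible in the trace; the values are from this run.
nr_hugepages=1024                       # 1024 x 2048 kB pages = 2 GB requested
surp=$(get_meminfo HugePages_Surp)      # -> 0
resv=$(get_meminfo HugePages_Rsvd)      # -> 0
total=$(get_meminfo HugePages_Total)    # -> 1024

# The kernel's total must cover the requested pages plus any surplus/reserved ones.
(( total == nr_hugepages + surp + resv )) || exit 1

The check passes here (1024 == 1024 + 0 + 0), so the test goes on to account for the pages across the two NUMA nodes, 512 per node, and re-reads the counters from each node's own meminfo file, as traced next.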
00:03:20.252 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116980 kB' 'MemFree: 43856552 kB' 'MemUsed: 4260428 kB' 'SwapCached: 0 kB' 'Active: 1736084 kB' 'Inactive: 216068 kB' 'Active(anon): 1474676 kB' 'Inactive(anon): 0 kB' 'Active(file): 261408 kB' 'Inactive(file): 216068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1789164 kB' 'Mapped: 69932 kB' 'AnonPages: 166224 kB' 'Shmem: 1311688 kB' 'KernelStack: 7944 kB' 'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111836 kB' 'Slab: 350208 kB' 'SReclaimable: 111836 kB' 'SUnreclaim: 238372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
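Note that for this per-node pass the helper switches its source file: with node=0 it reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, and strips the leading "Node 0 " from every line before scanning. The same counters the test is parsing here can be inspected by hand on a NUMA machine like this test node, for example:

# Per-node hugepage counters (each line is prefixed with "Node <n>"):
grep HugePages_ /sys/devices/system/node/node*/meminfo
# System-wide view used by the earlier get_meminfo calls:
grep HugePages_ /proc/meminfo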
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.253 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176548 kB' 'MemFree: 30420076 kB' 'MemUsed: 13756472 kB' 'SwapCached: 0 kB' 'Active: 8711268 kB' 'Inactive: 3249324 kB' 'Active(anon): 8535236 kB' 'Inactive(anon): 0 kB' 'Active(file): 176032 kB' 'Inactive(file): 3249324 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11579684 kB' 'Mapped: 125224 kB' 'AnonPages: 380956 kB' 'Shmem: 8154328 kB' 'KernelStack: 8920 kB' 'PageTables: 4728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88260 kB' 'Slab: 259284 kB' 'SReclaimable: 88260 kB' 'SUnreclaim: 171024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.254 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:20.255 node0=512 expecting 512 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:20.255 node1=512 expecting 512 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:20.255 00:03:20.255 real 0m3.532s 00:03:20.255 user 0m1.361s 00:03:20.255 sys 0m2.268s 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.255 13:33:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:20.255 ************************************ 00:03:20.255 END TEST even_2G_alloc 00:03:20.255 ************************************ 00:03:20.255 13:33:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:20.255 13:33:46 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:20.255 13:33:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.255 13:33:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.255 13:33:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.255 ************************************ 00:03:20.255 START TEST odd_alloc 00:03:20.255 ************************************ 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 
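The nodes_test assignments traced around this point are the odd_alloc test dividing its 1025 requested 2 MiB hugepages across the two NUMA nodes: node 1 gets 1025/2 = 512 pages and the remaining 513 land on node 0, which is what the later "node0=513 expecting 513" / "node1=512 expecting 512" checks expect. A minimal sketch of that division loop, reconstructed from the trace (variable names are taken from the trace; the exact statements in setup/hugepages.sh may differ):
#!/usr/bin/env bash
# Reconstruction of the per-node split seen in the odd_alloc trace:
# 1025 pages over 2 nodes -> nodes_test[1]=512, then the remaining 513
# go to nodes_test[0].
_nr_hugepages=1025
_no_nodes=2
declare -a nodes_test
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traces as ": 513" then ": 0"
    : $(( --_no_nodes ))                                  # traces as ": 1" then ": 0"
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"      # node0=513 node1=512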
00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.255 13:33:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:23.541 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:23.541 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:23.541 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:23.542 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
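Before the verification loop that follows, the odd_alloc test sets HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes and re-runs scripts/setup.sh, i.e. it asks for 2049 MiB of 2 MiB hugepages spread evenly across both nodes; the PCI functions listed above are reported as already bound to vfio-pci, so only the hugepage configuration changes on this pass. An equivalent manual invocation, assuming setup.sh reads these variables from the environment the way the trace suggests:
# Assumption: setup.sh honours HUGEMEM / HUGE_EVEN_ALLOC from the environment,
# as the assignments in the trace above suggest.
HUGEMEM=2049 HUGE_EVEN_ALLOC=yes \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh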
00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74276908 kB' 'MemAvailable: 77626144 kB' 'Buffers: 2704 kB' 'Cached: 13366212 kB' 'SwapCached: 0 kB' 'Active: 10449344 kB' 'Inactive: 3465392 kB' 'Active(anon): 10011904 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548672 kB' 'Mapped: 195336 kB' 'Shmem: 9466084 kB' 'KReclaimable: 200096 kB' 'Slab: 609420 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409324 kB' 'KernelStack: 16912 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 11417276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205984 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 
13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.542 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.806 13:33:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.806 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
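The long run of continue entries above is setup/common.sh scanning each /proc/meminfo field until it reaches AnonHugePages; the echo 0 that follows is the value handed back to the caller (anon=0). Reconstructed from the trace, the lookup works roughly like this (the mapfile, the "Node <N>" prefix strip and the IFS=': ' read come straight from the traced commands; the surrounding function wrapper is inferred and may differ from the real setup/common.sh):
#!/usr/bin/env bash
shopt -s extglob
# Sketch of the meminfo lookup traced above (get_meminfo <key> [node]).
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node index, prefer the per-node view when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo AnonHugePages      # -> 0 in this run (no transparent hugepages in use)
get_meminfo HugePages_Surp 1   # -> 0 in this run (no surplus pages on node 1)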
00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74276656 kB' 'MemAvailable: 77625892 kB' 'Buffers: 2704 kB' 'Cached: 13366212 kB' 'SwapCached: 0 kB' 'Active: 10448996 kB' 'Inactive: 3465392 kB' 'Active(anon): 10011556 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548320 kB' 'Mapped: 195248 kB' 'Shmem: 9466084 kB' 'KReclaimable: 200096 kB' 'Slab: 609420 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409324 kB' 'KernelStack: 16896 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 11418312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206000 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.807 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 
13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
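The loop being traced here is get_meminfo from setup/common.sh: it snapshots /proc/meminfo (mapfile -t mem, then the printf of the whole snapshot above), walks the entries with IFS=': ' and read -r var val _, skips every key that is not the one requested (HugePages_Surp in this pass), and finally echoes the matching value, which hugepages.sh stores as surp. A rough standalone sketch of that pattern — not SPDK's actual helper, just the same idea, with a hypothetical name get_field — would be:

    # sketch only; SPDK's real helper is get_meminfo in setup/common.sh as traced above
    get_field() {
        local want=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # skip non-matching keys, as in the trace
            echo "$val"                         # print the value column for the match
            return 0
        done < "$file"
        return 1                                # key not present
    }
    # e.g. surp=$(get_field HugePages_Surp)     # -> 0 on this node, per this trace
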
00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.808 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 
13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74276500 kB' 'MemAvailable: 77625740 kB' 'Buffers: 2704 kB' 'Cached: 13366212 
kB' 'SwapCached: 0 kB' 'Active: 10449704 kB' 'Inactive: 3465392 kB' 'Active(anon): 10012264 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549460 kB' 'Mapped: 195672 kB' 'Shmem: 9466084 kB' 'KReclaimable: 200104 kB' 'Slab: 609404 kB' 'SReclaimable: 200104 kB' 'SUnreclaim: 409300 kB' 'KernelStack: 16864 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 11419460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205968 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.809 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
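Each of these scans feeds one number into hugepages.sh's bookkeeping for the odd-allocation case: anon, surp and resv all come back 0 here, HugePages_Total comes back 1025, the script then checks 1025 == nr_hugepages + surp + resv, and get_nodes splits the odd total across the two NUMA nodes as 512 + 513 (all visible further down in this trace). A compressed sketch of that arithmetic, using only values recorded in this log (the real checks live in hugepages.sh, which also keeps them in its nodes_sys/nodes_test arrays):

    nr_hugepages=1025 anon=0 surp=0 resv=0      # values echoed later in this trace
    (( 1025 == nr_hugepages + surp + resv )) && echo "hugepage pool accounted for"
    nodes_test=(512 513)                        # odd total split across the 2 NUMA nodes
    echo $(( nodes_test[0] + nodes_test[1] ))   # prints 1025
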
00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 
-- # return 0 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:23.810 nr_hugepages=1025 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.810 resv_hugepages=0 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.810 surplus_hugepages=0 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.810 anon_hugepages=0 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:23.810 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74276756 kB' 'MemAvailable: 77625992 kB' 'Buffers: 2704 kB' 'Cached: 13366212 kB' 'SwapCached: 0 kB' 'Active: 10453276 kB' 'Inactive: 3465392 kB' 'Active(anon): 10015836 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553564 kB' 'Mapped: 195672 kB' 'Shmem: 9466084 kB' 'KReclaimable: 200096 kB' 'Slab: 609396 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409300 kB' 'KernelStack: 16928 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 11423452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205972 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 
13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.811 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.812 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116980 kB' 'MemFree: 43857200 kB' 'MemUsed: 4259780 kB' 'SwapCached: 0 kB' 'Active: 1735812 kB' 'Inactive: 216068 kB' 'Active(anon): 1474404 kB' 'Inactive(anon): 0 kB' 'Active(file): 261408 kB' 'Inactive(file): 216068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1789204 kB' 'Mapped: 70096 kB' 'AnonPages: 165804 kB' 'Shmem: 1311728 kB' 'KernelStack: 7928 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111828 kB' 'Slab: 349980 kB' 'SReclaimable: 111828 kB' 'SUnreclaim: 238152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
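The loop being traced here is setup/common.sh's get_meminfo helper scanning /sys/devices/system/node/node0/meminfo one "key: value" pair at a time until it hits the HugePages_Surp entry (the "echo 0" further down). A minimal standalone sketch of that lookup pattern follows; the function name lookup_meminfo and the two example calls at the bottom are illustrative only, not SPDK's API.

#!/usr/bin/env bash
# Minimal sketch of the per-node counter lookup traced above (illustrative,
# not SPDK's actual helper). With no node argument it scans /proc/meminfo;
# with a node index it scans /sys/devices/system/node/node<N>/meminfo, whose
# lines carry a leading "Node <N> " prefix that has to be stripped first.
shopt -s extglob

lookup_meminfo() {
    local get=$1 node=${2:-} var val _ mem mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node <N> " prefix
    while IFS=': ' read -r var val _; do    # split "key: value [kB]"
        if [[ $var == "$get" ]]; then
            echo "$val"                     # bare number, unit field dropped
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

lookup_meminfo HugePages_Total       # system-wide pool, from /proc/meminfo
lookup_meminfo HugePages_Surp 0      # surplus 2 MiB pages on NUMA node 0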
00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.813 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176548 
kB' 'MemFree: 30424616 kB' 'MemUsed: 13751932 kB' 'SwapCached: 0 kB' 'Active: 8711732 kB' 'Inactive: 3249324 kB' 'Active(anon): 8535700 kB' 'Inactive(anon): 0 kB' 'Active(file): 176032 kB' 'Inactive(file): 3249324 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11579788 kB' 'Mapped: 125224 kB' 'AnonPages: 381464 kB' 'Shmem: 8154432 kB' 'KernelStack: 8952 kB' 'PageTables: 4776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88268 kB' 'Slab: 259416 kB' 'SReclaimable: 88268 kB' 'SUnreclaim: 171148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
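At this point the same helper repeats the scan against node 1 (the printf above shows that node's counters, including HugePages_Total: 513). The surrounding hugepages.sh logic then checks that the per-node totals match the requested odd split; a simplified sketch of that verification is below. The expected values and the order-independent comparison mirror this run's sorted_t/sorted_s bookkeeping, but the surplus/reserved adjustments are omitted.

#!/usr/bin/env bash
# Sketch of the per-node verification this trace is driving toward
# (simplified, illustrative only). The odd_alloc test only requires that the
# *set* of per-node counts matches the requested 512/513 split, which is why
# "node0=512 expecting 513" further down still counts as a pass.
declare -A expected=([0]=513 [1]=512)   # illustrative values from this run
declare -A got

for node in "${!expected[@]}"; do
    # sysfs lines look like "Node 0 HugePages_Total:   512"
    got[$node]=$(awk '$3 == "HugePages_Total:" {print $4}' \
                 "/sys/devices/system/node/node$node/meminfo")
    echo "node$node=${got[$node]} expecting ${expected[$node]}"
done

# Compare the two splits as order-independent sets, like sorted_t/sorted_s.
want=$(printf '%s\n' "${expected[@]}" | sort -n | xargs)
have=$(printf '%s\n' "${got[@]}" | sort -n | xargs)
[[ $have == "$want" ]] && echo "odd allocation verified" || echo "mismatch" >&2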
00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:23.815 node0=512 expecting 513 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:23.815 node1=513 expecting 512 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:23.815 00:03:23.815 real 0m3.523s 00:03:23.815 user 0m1.327s 00:03:23.815 sys 0m2.284s 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.815 13:33:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.815 ************************************ 00:03:23.815 END TEST odd_alloc 00:03:23.815 ************************************ 00:03:23.815 13:33:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:23.815 13:33:50 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:23.815 13:33:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.815 13:33:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.815 13:33:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.815 ************************************ 00:03:23.815 START TEST custom_alloc 00:03:23.815 ************************************ 00:03:23.815 13:33:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:23.815 13:33:50 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:23.815 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:23.815 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:23.815 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:23.815 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:23.815 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:23.815 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:23.815 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:23.816 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@78 -- # return 0 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.075 13:33:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:27.371 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.371 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.371 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73216308 kB' 'MemAvailable: 76565544 kB' 'Buffers: 2704 kB' 'Cached: 13366356 kB' 'SwapCached: 0 kB' 'Active: 10448920 kB' 'Inactive: 3465392 kB' 'Active(anon): 10011480 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547980 kB' 'Mapped: 195336 kB' 'Shmem: 9466228 kB' 'KReclaimable: 200096 kB' 'Slab: 608980 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 408884 kB' 'KernelStack: 16928 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 11417680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206048 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
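The custom_alloc test above asked setup.sh, via HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', for an asymmetric two-node layout, and the meminfo snapshot just printed confirms the pool now holds 1536 pages. Independent of SPDK's helper, the same 512/1024 split could be requested by hand through the kernel's standard per-node sysfs knob for 2048 kB pages; a rough root-only sketch, not SPDK's setup.sh:

#!/usr/bin/env bash
# Hand-rolled illustration (not SPDK's setup.sh) of producing the same
# 512 + 1024 = 1536 layout requested via HUGENODE above, using the kernel's
# per-node nr_hugepages knob for 2048 kB pages. Requires root; the kernel may
# grant fewer pages than requested if node memory is fragmented.
declare -A split=([0]=512 [1]=1024)

for node in "${!split[@]}"; do
    knob=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "${split[$node]}" > "$knob"
    echo "node$node: requested ${split[$node]}, kernel granted $(cat "$knob")"
done

grep HugePages_Total /proc/meminfo    # should now report 1536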
00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.371 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.372 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73218796 kB' 'MemAvailable: 76568032 kB' 'Buffers: 2704 kB' 'Cached: 13366360 kB' 'SwapCached: 0 kB' 'Active: 10448072 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010632 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547572 kB' 'Mapped: 195184 kB' 'Shmem: 9466232 kB' 'KReclaimable: 200096 kB' 'Slab: 608960 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 408864 kB' 'KernelStack: 16912 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 11417696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205984 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.373 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.374 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.375 13:33:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73218872 kB' 'MemAvailable: 76568108 kB' 'Buffers: 2704 kB' 'Cached: 13366380 kB' 'SwapCached: 0 kB' 'Active: 10447876 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010436 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547332 kB' 'Mapped: 195184 kB' 'Shmem: 9466252 kB' 'KReclaimable: 200096 kB' 'Slab: 608968 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 408872 kB' 'KernelStack: 16880 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 11417716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205984 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.375 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
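The same full scan repeats once per requested key, which is why the meminfo snapshot and the key-by-key continue trail appear again for HugePages_Surp above and HugePages_Rsvd here. Going by the assignments traced at setup/hugepages.sh@97 and @99 (anon=0, surp=0) and at @100 just below (resv=0), the caller most plausibly captures each lookup with command substitution; the wrapper form below is an assumption, only the key names and the resulting values come from the trace:

    anon=$(get_meminfo AnonHugePages)   # 0 in this run
    surp=$(get_meminfo HugePages_Surp)  # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)  # 0 in this run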
00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.376 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:27.377 nr_hugepages=1536 00:03:27.377 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.377 resv_hugepages=0 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.378 surplus_hugepages=0 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.378 anon_hugepages=0 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
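With anon, surp and resv all reading 0, the script prints the summary seen above (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and evaluates the two arithmetic guards traced at setup/hugepages.sh@107 and @109 before starting the HugePages_Total lookup whose trace continues below. Restating those guards with this run's values (a worked restatement of the traced expressions, not new logic):

    nr_hugepages=1536 surp=0 resv=0
    (( 1536 == nr_hugepages + surp + resv ))   # hugepages.sh@107: 1536 == 1536 + 0 + 0
    (( 1536 == nr_hugepages ))                 # hugepages.sh@109: 1536 == 1536

Both checks hold with this run's values.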
00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 73227952 kB' 'MemAvailable: 76577188 kB' 'Buffers: 2704 kB' 'Cached: 13366380 kB' 'SwapCached: 0 kB' 'Active: 10448380 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010940 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547332 kB' 'Mapped: 195184 kB' 'Shmem: 9466252 kB' 'KReclaimable: 200096 kB' 'Slab: 608968 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 408872 kB' 'KernelStack: 16880 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 11417740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 205984 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.378 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.379 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116980 kB' 'MemFree: 43844900 kB' 'MemUsed: 4272080 kB' 'SwapCached: 0 kB' 'Active: 1737096 kB' 'Inactive: 216068 kB' 'Active(anon): 1475688 kB' 'Inactive(anon): 0 kB' 'Active(file): 261408 kB' 'Inactive(file): 216068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1789324 kB' 'Mapped: 69960 kB' 'AnonPages: 166988 kB' 'Shmem: 1311848 kB' 'KernelStack: 7944 kB' 'PageTables: 3376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111828 
kB' 'Slab: 349512 kB' 'SReclaimable: 111828 kB' 'SUnreclaim: 237684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.380 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.381 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176548 kB' 'MemFree: 29384112 kB' 'MemUsed: 14792436 kB' 'SwapCached: 0 kB' 'Active: 8710632 kB' 'Inactive: 3249324 kB' 'Active(anon): 8534600 kB' 'Inactive(anon): 0 kB' 'Active(file): 176032 kB' 'Inactive(file): 3249324 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11579820 kB' 'Mapped: 125224 kB' 'AnonPages: 380188 kB' 'Shmem: 8154464 kB' 'KernelStack: 8952 kB' 'PageTables: 4776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88268 kB' 'Slab: 259432 kB' 'SReclaimable: 88268 kB' 'SUnreclaim: 171164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 
0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:27.383 node0=512 expecting 512 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:27.383 node1=1024 expecting 1024 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:27.383 00:03:27.383 real 0m3.511s 00:03:27.383 user 0m1.337s 00:03:27.383 sys 0m2.267s 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.383 13:33:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:27.383 ************************************ 00:03:27.383 END TEST custom_alloc 00:03:27.383 ************************************ 00:03:27.383 13:33:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:27.383 13:33:53 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:27.383 13:33:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.383 13:33:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.383 13:33:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.642 ************************************ 00:03:27.642 START TEST no_shrink_alloc 00:03:27.642 ************************************ 00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 
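With END TEST custom_alloc above, the per-node accounting that produced "node0=512 expecting 512" and "node1=1024 expecting 1024" (hugepages.sh@112-130 in the trace) condenses to roughly the sketch below. Names mirror the trace, the get_meminfo helper is assumed from the earlier sketch, and several intermediate steps (reserved-page handling, the sorted_t/sorted_s bookkeeping, the final 512,1024 string compare) are folded together rather than reproduced verbatim.

# Condensed sketch of the per-node hugepage check traced in hugepages.sh@112-130.
# Assumes get_meminfo from the earlier sketch; values match this run (node0=512, node1=1024).
nodes_sys=([0]=512 [1]=1024)   # per-node totals read from /sys/devices/system/node/node*/meminfo
nodes_test=("${nodes_sys[@]}")

check_custom_split() {
    local node surp
    for node in "${!nodes_test[@]}"; do
        # Surplus pages on a node would show up on top of the requested count.
        surp=$(get_meminfo HugePages_Surp "$node")
        (( nodes_test[node] += surp ))
        echo "node$node=${nodes_test[$node]} expecting ${nodes_sys[$node]}"
        (( nodes_test[node] == nodes_sys[node] )) || return 1
    done
}

check_custom_split   # both nodes report 0 surplus in this run, so the 512/1024 split verifies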
00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:27.642 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.643 13:33:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:30.941 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.941 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.941 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:30.941 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74271520 kB' 'MemAvailable: 77620756 kB' 'Buffers: 2704 kB' 'Cached: 13366508 kB' 'SwapCached: 0 kB' 'Active: 10449580 kB' 'Inactive: 3465392 kB' 'Active(anon): 10012140 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548988 kB' 'Mapped: 195284 kB' 'Shmem: 9466380 kB' 'KReclaimable: 200096 kB' 'Slab: 609340 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409244 kB' 'KernelStack: 16944 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11421180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206112 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.941 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
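The long runs of [[ key == ... ]] / continue entries around this point are get_meminfo walking a /proc/meminfo snapshot one field at a time (IFS=': ' read -r var val _), skipping every key that is not the one requested and finally echoing the matching value (AnonHugePages here, then HugePages_Surp and HugePages_Rsvd further down). A standalone sketch of that pattern, simplified and not the actual setup/common.sh code, reading /proc/meminfo directly:

# Scan /proc/meminfo for a single field, mirroring the traced loop: split each
# line on ': ', skip non-matching keys, and print the value once the key matches.
get_meminfo_field() {
    local get=$1 var val
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated 'continue' entries in the trace
        echo "$val"                        # kB for most fields, a bare count for HugePages_*
        return 0
    done < /proc/meminfo
    echo 0                                  # field not present: report 0
}
get_meminfo_field AnonHugePages   # 0 on this host, per the dump above
get_meminfo_field HugePages_Surp  # the surplus count the verify step checks next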
00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.942 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.942 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74272984 kB' 'MemAvailable: 77622220 kB' 'Buffers: 2704 kB' 'Cached: 13366512 kB' 'SwapCached: 0 kB' 'Active: 10448964 kB' 'Inactive: 3465392 kB' 'Active(anon): 10011524 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548348 kB' 'Mapped: 195204 kB' 'Shmem: 9466384 kB' 'KReclaimable: 200096 kB' 'Slab: 609448 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409352 kB' 'KernelStack: 16944 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11421200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206128 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 
13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.943 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.944 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74272608 kB' 'MemAvailable: 77621844 kB' 'Buffers: 2704 kB' 'Cached: 13366528 kB' 'SwapCached: 0 kB' 'Active: 10448976 kB' 'Inactive: 3465392 kB' 'Active(anon): 10011536 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548316 kB' 'Mapped: 195204 kB' 'Shmem: 9466400 kB' 'KReclaimable: 200096 kB' 'Slab: 609448 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409352 kB' 'KernelStack: 17104 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11421084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206256 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 
13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.944 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.945 nr_hugepages=1024 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.945 resv_hugepages=0 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.945 surplus_hugepages=0 00:03:30.945 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.945 anon_hugepages=0 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.946 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74271616 kB' 'MemAvailable: 77620852 kB' 'Buffers: 2704 kB' 'Cached: 13366532 kB' 'SwapCached: 0 kB' 'Active: 10448760 kB' 'Inactive: 3465392 kB' 'Active(anon): 10011320 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548096 kB' 'Mapped: 195204 kB' 'Shmem: 9466404 kB' 'KReclaimable: 200096 kB' 'Slab: 609448 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409352 kB' 'KernelStack: 17056 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11421092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206224 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 
13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
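(The trace above and below is the unrolled get_meminfo helper from setup/common.sh: it sets IFS=': ', reads each meminfo line into var/val, skips every key that is not the one requested via continue, and echoes the matching value before returning. A minimal bash sketch of that pattern, reconstructed from the trace rather than taken from the script itself; the function name get_meminfo_sketch is illustrative, and sed is used here as a simplification in place of the script's own "Node N " prefix stripping.)
#!/usr/bin/env bash
# Sketch only: reproduces the lookup pattern visible in the xtrace above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read that node's meminfo file when it exists,
    # mirroring the node0 lookup later in this trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip non-matching keys; echo the value of the requested one.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
}
# Example (hypothetical): get_meminfo_sketch HugePages_Free 0  ->  1024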
00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116980 kB' 'MemFree: 42791212 kB' 'MemUsed: 5325768 kB' 'SwapCached: 0 kB' 'Active: 1738396 kB' 'Inactive: 216068 kB' 'Active(anon): 1476988 kB' 'Inactive(anon): 0 kB' 'Active(file): 261408 kB' 'Inactive(file): 216068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1789420 kB' 'Mapped: 69980 kB' 'AnonPages: 168188 kB' 'Shmem: 1311944 kB' 'KernelStack: 7944 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111828 kB' 'Slab: 350096 kB' 'SReclaimable: 111828 kB' 'SUnreclaim: 238268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.948 node0=1024 expecting 1024 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.948 13:33:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:34.244 0000:00:04.7 (8086 2021): Already using the vfio-pci 
driver 00:03:34.244 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.244 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.244 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.244 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74249252 kB' 'MemAvailable: 77598488 kB' 'Buffers: 2704 kB' 'Cached: 
13366636 kB' 'SwapCached: 0 kB' 'Active: 10449148 kB' 'Inactive: 3465392 kB' 'Active(anon): 10011708 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547856 kB' 'Mapped: 195320 kB' 'Shmem: 9466508 kB' 'KReclaimable: 200096 kB' 'Slab: 609472 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409376 kB' 'KernelStack: 16928 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11419084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206032 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.244 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.245 13:34:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.245 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74249096 kB' 'MemAvailable: 77598332 kB' 'Buffers: 2704 kB' 'Cached: 13366652 kB' 'SwapCached: 0 kB' 'Active: 10447868 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010428 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547060 kB' 'Mapped: 195204 kB' 'Shmem: 9466524 kB' 'KReclaimable: 200096 kB' 'Slab: 609444 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409348 kB' 'KernelStack: 16896 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11419104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206016 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.246 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 
13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.247 
13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.247 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74249444 kB' 'MemAvailable: 77598680 kB' 'Buffers: 2704 kB' 'Cached: 13366656 kB' 'SwapCached: 0 kB' 'Active: 10448104 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010664 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547316 kB' 'Mapped: 195204 kB' 'Shmem: 9466528 kB' 'KReclaimable: 200096 kB' 'Slab: 609440 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409344 kB' 'KernelStack: 16912 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11419124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206000 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 
13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.248 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.512 13:34:00 setup.sh.hugepages.no_shrink_alloc -- [setup/common.sh@31-32 xtrace: the same "IFS=': '", "read -r var val _", "[[ <field> == HugePages_Rsvd ]]", "continue" sequence repeats for every remaining /proc/meminfo field from Writeback through FileHugePages; none of them match] 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.513 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.514 nr_hugepages=1024 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.514 resv_hugepages=0 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.514 surplus_hugepages=0 00:03:34.514 13:34:00 
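The long run of continue records above is setup/common.sh's get_meminfo scanning /proc/meminfo field by field until it reaches the requested key; here it finds HugePages_Rsvd = 0, after which hugepages.sh reports resv=0 alongside nr_hugepages=1024 and surplus_hugepages=0. For reference, a minimal stand-alone sketch of that lookup pattern (illustrative only, not the SPDK helper's actual source; the function name is made up):

    #!/usr/bin/env bash
    # Sketch of the lookup traced above: scan a meminfo file with IFS=': ' and
    # print the value of a single field. Not the SPDK setup/common.sh code.
    shopt -s extglob
    meminfo_lookup() {                        # hypothetical name
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-node copy under sysfs, as the
        # trace does for node0 further down.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # sysfs copies prefix lines with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the repeated continue seen above
            echo "$val"                       # e.g. HugePages_Rsvd -> 0
            return 0
        done
        return 1
    }
    # meminfo_lookup HugePages_Total     -> 1024 on this host
    # meminfo_lookup HugePages_Surp 0    -> 0 for NUMA node 0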
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.514 anon_hugepages=0 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 74250256 kB' 'MemAvailable: 77599492 kB' 'Buffers: 2704 kB' 'Cached: 13366656 kB' 'SwapCached: 0 kB' 'Active: 10448000 kB' 'Inactive: 3465392 kB' 'Active(anon): 10010560 kB' 'Inactive(anon): 0 kB' 'Active(file): 437440 kB' 'Inactive(file): 3465392 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547212 kB' 'Mapped: 195204 kB' 'Shmem: 9466528 kB' 'KReclaimable: 200096 kB' 'Slab: 609440 kB' 'SReclaimable: 200096 kB' 'SUnreclaim: 409344 kB' 'KernelStack: 16912 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 11419148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 206000 kB' 'VmallocChunk: 0 kB' 'Percpu: 56640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1533388 kB' 'DirectMap2M: 19113984 kB' 'DirectMap1G: 80740352 kB' 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.514 13:34:00 setup.sh.hugepages.no_shrink_alloc -- [setup/common.sh@31-32 xtrace: the same read/compare/continue sequence repeats for every /proc/meminfo field from MemAvailable through ShmemPmdMapped; none of them match HugePages_Total] 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.516 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- 
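The HugePages_Total lookup above returns 1024, which satisfies hugepages.sh's consistency check (( 1024 == nr_hugepages + surp + resv )); 1024 pages at the 2048 kB Hugepagesize reported in the snapshot is 2,097,152 kB, matching its Hugetlb line. get_nodes then splits the pool per NUMA node, ending with nodes_sys[0]=1024, nodes_sys[1]=0 and no_nodes=2. A rough equivalent of that enumeration (a sketch; the assumption that the count comes from each node's 2048 kB nr_hugepages counter is mine, though it is consistent with the values above):

    #!/usr/bin/env bash
    # Sketch of the per-node hugepage enumeration traced above (not hugepages.sh).
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the traced values (1024 on node0, 0 on node1).
        nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes"                  # 2 on this host
    for n in "${!nodes_sys[@]}"; do
        echo "node$n=${nodes_sys[$n]}"         # node0=1024, node1=0 in this run
    done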
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116980 kB' 'MemFree: 42779588 kB' 'MemUsed: 5337392 kB' 'SwapCached: 0 kB' 'Active: 1736500 kB' 'Inactive: 216068 kB' 'Active(anon): 1475092 kB' 'Inactive(anon): 0 kB' 'Active(file): 261408 kB' 'Inactive(file): 216068 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1789504 kB' 'Mapped: 69980 kB' 'AnonPages: 166164 kB' 'Shmem: 1312028 kB' 'KernelStack: 7928 kB' 'PageTables: 3272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111828 kB' 'Slab: 350080 kB' 'SReclaimable: 111828 kB' 'SUnreclaim: 238252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 13:34:00 
setup.sh.hugepages.no_shrink_alloc -- [setup/common.sh@31-32 xtrace: the same read/compare/continue sequence repeats for every node0 meminfo field from SwapCached through HugePages_Free; none of them match HugePages_Surp] 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@33 -- # return 0 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:34.519 node0=1024 expecting 1024 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.519 00:03:34.519 real 0m6.940s 00:03:34.519 user 0m2.666s 00:03:34.519 sys 0m4.458s 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.519 13:34:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.519 ************************************ 00:03:34.519 END TEST no_shrink_alloc 00:03:34.519 ************************************ 00:03:34.519 13:34:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:34.519 13:34:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:34.519 00:03:34.519 real 0m30.297s 00:03:34.519 user 0m9.803s 00:03:34.519 sys 0m16.318s 00:03:34.519 13:34:00 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.519 13:34:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.519 ************************************ 00:03:34.519 END TEST hugepages 00:03:34.519 ************************************ 00:03:34.519 13:34:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:34.519 13:34:00 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:34.519 13:34:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.519 13:34:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.519 13:34:00 
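With node0 holding the expected 1024 pages, no_shrink_alloc passes (node0=1024 expecting 1024) and the hugepages suite tears down through clear_hp, which walks every hugepage size on every node, writes the count back to zero and exports CLEAR_HUGE=yes. xtrace does not show redirections, so the target file in the sketch below is an assumption based on the sysfs paths being iterated:

    #!/usr/bin/env bash
    # Sketch of the clear_hp teardown traced above (not setup/hugepages.sh itself).
    shopt -s extglob nullglob
    for node in /sys/devices/system/node/node+([0-9]); do
        for hp in "$node"/hugepages/hugepages-*; do
            # Assumed target of the bare "echo 0" records in the trace.
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes    # later test stages allocate their own pool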
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:34.519 ************************************ 00:03:34.519 START TEST driver 00:03:34.519 ************************************ 00:03:34.519 13:34:00 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:34.778 * Looking for test storage... 00:03:34.778 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:34.778 13:34:01 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:34.778 13:34:01 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.778 13:34:01 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.049 13:34:05 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:40.049 13:34:05 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.049 13:34:05 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.049 13:34:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:40.049 ************************************ 00:03:40.049 START TEST guess_driver 00:03:40.050 ************************************ 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 163 > 0 )) 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:40.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:40.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:40.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:40.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:40.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:40.050 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:40.050 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:40.050 Looking for driver=vfio-pci 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.050 13:34:05 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.589 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.848 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.848 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.848 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.848 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.848 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.849 13:34:09 
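The block above is guess_driver concluding that vfio-pci is usable: the vfio module's enable_unsafe_noiommu_mode parameter exists (and is N), 163 IOMMU groups are populated, and modprobe --show-depends vfio_pci resolves to real .ko modules, so the function echoes vfio-pci instead of 'No valid driver found'. A condensed re-creation of that decision (a sketch only; the function name is invented and the real setup/driver.sh has fallbacks not reproduced here):

    #!/usr/bin/env bash
    # Sketch of the vfio-pci suitability check traced above (not setup/driver.sh).
    pick_vfio_sketch() {
        local unsafe_vfio=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # Usable when the IOMMU is active (163 groups in this run) or unsafe
        # no-IOMMU mode has been switched on.
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; then
            # modprobe --show-depends only prints insmod lines for modules that
            # actually exist on disk, hence the *.ko* match.
            if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }
    pick_vfio_sketch    # -> vfio-pci on this host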
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.849 13:34:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.127 13:34:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.127 13:34:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.127 13:34:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.127 13:34:14 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:48.127 13:34:14 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:48.127 13:34:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.127 13:34:14 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.476 00:03:53.476 real 0m13.116s 00:03:53.476 user 0m2.603s 00:03:53.476 sys 0m4.981s 00:03:53.476 13:34:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 
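Note: the guess_driver trace above reduces to one small decision: report vfio-pci when the kernel exposes IOMMU groups and the module's dependency chain resolves via modprobe. A minimal standalone sketch of that check (the uio fallback name is an assumption and is not exercised in this run; the real driver.sh also consults vfio's enable_unsafe_noiommu_mode parameter):

#!/usr/bin/env bash
# Prefer vfio-pci when IOMMU groups exist and vfio_pci resolves via modprobe.
set -euo pipefail
shopt -s nullglob
iommu_groups=(/sys/kernel/iommu_groups/*)
if (( ${#iommu_groups[@]} > 0 )) && modprobe --show-depends vfio_pci &>/dev/null; then
    echo vfio-pci
else
    echo uio_pci_generic   # assumed fallback driver; not shown in this log
fi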
00:03:53.476 13:34:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:53.476 ************************************ 00:03:53.476 END TEST guess_driver 00:03:53.476 ************************************ 00:03:53.476 13:34:19 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:53.476 00:03:53.476 real 0m18.026s 00:03:53.476 user 0m4.030s 00:03:53.476 sys 0m7.677s 00:03:53.476 13:34:19 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.476 13:34:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:53.476 ************************************ 00:03:53.476 END TEST driver 00:03:53.476 ************************************ 00:03:53.476 13:34:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:53.476 13:34:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:53.477 13:34:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.477 13:34:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.477 13:34:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.477 ************************************ 00:03:53.477 START TEST devices 00:03:53.477 ************************************ 00:03:53.477 13:34:19 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:03:53.477 * Looking for test storage... 00:03:53.477 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:53.477 13:34:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:53.477 13:34:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:53.477 13:34:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.477 13:34:19 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:56.768 13:34:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:56.768 13:34:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:56.768 13:34:22 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:56.768 13:34:22 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:56.768 13:34:22 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:56.768 13:34:22 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:56.768 13:34:22 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.768 13:34:22 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:56.768 13:34:22 setup.sh.devices -- 
setup/devices.sh@201 -- # ctrl=nvme0 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:56.768 13:34:22 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:56.768 13:34:22 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:56.768 13:34:22 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:56.768 No valid GPT data, bailing 00:03:56.768 13:34:22 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.768 13:34:22 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:56.768 13:34:22 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:56.769 13:34:22 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:56.769 13:34:22 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:56.769 13:34:22 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:56.769 13:34:22 setup.sh.devices -- setup/common.sh@80 -- # echo 8001563222016 00:03:56.769 13:34:22 setup.sh.devices -- setup/devices.sh@204 -- # (( 8001563222016 >= min_disk_size )) 00:03:56.769 13:34:22 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:56.769 13:34:22 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:03:56.769 13:34:22 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:56.769 13:34:22 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:56.769 13:34:22 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:56.769 13:34:22 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.769 13:34:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.769 13:34:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:56.769 ************************************ 00:03:56.769 START TEST nvme_mount 00:03:56.769 ************************************ 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:56.769 13:34:23 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:56.769 13:34:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:57.705 Creating new GPT entries in memory. 00:03:57.705 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.705 other utilities. 00:03:57.705 13:34:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.705 13:34:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.705 13:34:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.705 13:34:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.705 13:34:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:58.641 Creating new GPT entries in memory. 00:03:58.641 The operation has completed successfully. 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2327863 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 
-- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.641 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.900 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:58.900 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.900 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.900 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:03:58.900 13:34:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.900 13:34:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.900 13:34:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 
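Note: the nvme_mount setup traced above boils down to wiping the GPT, creating one ~1 GiB partition, formatting it, mounting it under the test tree, and dropping a marker file for the verify pass. A hedged sketch of that sequence, using the disk and mount point from this run (run only against a disposable test disk; udevadm settle stands in for the harness's sync_dev_uevents.sh helper):

#!/usr/bin/env bash
set -euo pipefail
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all                   # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199        # same partition bounds as in the trace
udevadm settle                             # wait for /dev/nvme0n1p1 to appear
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                     # marker file checked by the verify step
mountpoint -q "$mnt" && echo 'nvme_mount ready'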
00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.189 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:02.190 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:02.190 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:02.450 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:02.450 /dev/nvme0n1: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54 00:04:02.450 /dev/nvme0n1: 2 bytes 
were erased at offset 0x000001fe (PMBR): 55 aa 00:04:02.450 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.450 13:34:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
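Note: the cleanup_nvme pass a few entries back simply unmounts the test directory and scrubs filesystem and partition-table signatures so the next sub-test starts from a blank disk. A minimal sketch, assuming the same disk and mount point as above:

#!/usr/bin/env bash
set -euo pipefail
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
if mountpoint -q "$mnt"; then umount "$mnt"; fi
if [[ -b ${disk}p1 ]]; then wipefs --all "${disk}p1"; fi   # erase the ext4 signature on the partition
if [[ -b $disk ]]; then wipefs --all "$disk"; fi           # erase GPT/PMBR signatures on the whole disk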
00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:05.739 13:34:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.739 13:34:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:09.024 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.025 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.025 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.025 13:34:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:09.025 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:09.025 00:04:09.025 real 0m12.451s 00:04:09.025 user 0m3.654s 00:04:09.025 sys 0m6.720s 00:04:09.025 13:34:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.025 13:34:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:09.025 ************************************ 00:04:09.025 END TEST nvme_mount 00:04:09.025 ************************************ 00:04:09.025 13:34:35 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:09.025 13:34:35 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:09.025 13:34:35 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.025 13:34:35 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.025 13:34:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:09.025 ************************************ 00:04:09.025 START TEST dm_mount 00:04:09.284 ************************************ 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:09.284 13:34:35 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:10.240 Creating new GPT entries in memory. 00:04:10.240 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.240 other utilities. 
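Note: for dm_mount the same partition_drive helper runs with part_no=2, so the sgdisk calls just above and in the next few entries amount to the short sequence below (partition bounds copied from the trace, device name from this run; udevadm settle stands in for sync_dev_uevents.sh):

#!/usr/bin/env bash
set -euo pipefail
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199        # nvme0n1p1, ~1 GiB
sgdisk "$disk" --new=2:2099200:4196351     # nvme0n1p2, ~1 GiB
udevadm settle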
00:04:10.240 13:34:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.240 13:34:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.240 13:34:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.240 13:34:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.240 13:34:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:11.179 Creating new GPT entries in memory. 00:04:11.179 The operation has completed successfully. 00:04:11.179 13:34:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:11.179 13:34:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.179 13:34:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:11.179 13:34:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:11.179 13:34:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:12.116 The operation has completed successfully. 00:04:12.117 13:34:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:12.117 13:34:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.117 13:34:38 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2331676 00:04:12.376 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:12.376 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:12.376 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 
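Note: around this point the dm_mount test creates a device-mapper device named nvme_dm_test, resolves it to /dev/dm-0, then formats and mounts it. The trace never echoes the table it feeds to dmsetup; the linear concatenation of the two partitions below is an assumption, though it is consistent with dm-0 appearing under the holders/ directory of both nvme0n1p1 and nvme0n1p2:

#!/usr/bin/env bash
set -euo pipefail
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
mnt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount
s1=$(blockdev --getsz "$p1")               # sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
# dmsetup reads the table from stdin when no --table argument is given.
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test        # resolves to /dev/dm-0 in this run
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p "$mnt"
mount /dev/mapper/nvme_dm_test "$mnt"
touch "$mnt/test_dm"                        # marker file checked by the verify step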
00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.377 13:34:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.667 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:15.668 13:34:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.668 13:34:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.964 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:18.965 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:18.965 13:34:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:19.224 00:04:19.224 real 0m9.949s 00:04:19.224 user 0m2.457s 00:04:19.224 sys 0m4.579s 00:04:19.224 13:34:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.224 13:34:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set 
+x 00:04:19.224 ************************************ 00:04:19.224 END TEST dm_mount 00:04:19.224 ************************************ 00:04:19.224 13:34:45 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:19.224 13:34:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:19.224 13:34:45 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:19.224 13:34:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.224 13:34:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.224 13:34:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:19.224 13:34:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:19.224 13:34:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:19.484 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:19.484 /dev/nvme0n1: 8 bytes were erased at offset 0x74702555e00 (gpt): 45 46 49 20 50 41 52 54 00:04:19.484 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:19.484 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:19.484 13:34:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:19.484 13:34:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:19.484 13:34:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:19.484 13:34:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.484 13:34:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:19.484 13:34:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:19.484 13:34:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:19.484 00:04:19.484 real 0m26.731s 00:04:19.484 user 0m7.611s 00:04:19.484 sys 0m14.058s 00:04:19.484 13:34:45 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.484 13:34:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:19.484 ************************************ 00:04:19.484 END TEST devices 00:04:19.484 ************************************ 00:04:19.484 13:34:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:19.484 00:04:19.484 real 1m43.415s 00:04:19.484 user 0m29.599s 00:04:19.484 sys 0m53.033s 00:04:19.484 13:34:45 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.484 13:34:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:19.484 ************************************ 00:04:19.484 END TEST setup.sh 00:04:19.484 ************************************ 00:04:19.484 13:34:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:19.484 13:34:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:22.774 Hugepages 00:04:22.774 node hugesize free / total 00:04:22.774 node0 1048576kB 0 / 0 00:04:22.774 node0 2048kB 2048 / 2048 00:04:22.774 node1 1048576kB 0 / 0 00:04:22.774 node1 2048kB 0 / 0 00:04:22.774 00:04:22.774 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.774 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:22.774 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:22.774 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:22.774 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:22.775 I/OAT 0000:00:04.4 8086 2021 0 
ioatdma - - 00:04:22.775 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:22.775 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:22.775 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:22.775 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:22.775 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:22.775 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:22.775 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:22.775 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:22.775 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:22.775 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:22.775 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:22.775 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:22.775 13:34:49 -- spdk/autotest.sh@130 -- # uname -s 00:04:23.034 13:34:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:23.034 13:34:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:23.034 13:34:49 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:26.440 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:26.440 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.713 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:31.713 13:34:57 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:32.282 13:34:58 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:32.282 13:34:58 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:32.282 13:34:58 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.282 13:34:58 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:32.282 13:34:58 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:32.282 13:34:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:32.282 13:34:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.282 13:34:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:32.282 13:34:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:32.282 13:34:58 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:32.282 13:34:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5f:00.0 00:04:32.282 13:34:58 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.571 Waiting for block devices as requested 00:04:35.571 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:04:35.571 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:35.830 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:35.830 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:35.830 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:36.090 0000:00:04.3 (8086 
2021): vfio-pci -> ioatdma 00:04:36.090 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.090 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.349 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:36.349 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.349 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.608 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:36.608 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:36.608 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:36.873 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.873 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.873 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:37.136 13:35:03 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:37.136 13:35:03 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:04:37.136 13:35:03 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.136 13:35:03 -- common/autotest_common.sh@1502 -- # grep 0000:5f:00.0/nvme/nvme 00:04:37.136 13:35:03 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:37.136 13:35:03 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:04:37.136 13:35:03 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:37.136 13:35:03 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:37.136 13:35:03 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:37.136 13:35:03 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:37.136 13:35:03 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:37.136 13:35:03 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:37.136 13:35:03 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:37.136 13:35:03 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:37.136 13:35:03 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:37.136 13:35:03 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:37.136 13:35:03 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:37.136 13:35:03 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:37.136 13:35:03 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:37.136 13:35:03 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:37.136 13:35:03 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:37.136 13:35:03 -- common/autotest_common.sh@1557 -- # continue 00:04:37.136 13:35:03 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:37.136 13:35:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.136 13:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:37.136 13:35:03 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:37.136 13:35:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.136 13:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:37.136 13:35:03 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:40.423 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 
00:04:40.424 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.424 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:45.696 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:45.696 13:35:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:45.696 13:35:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.696 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:04:45.696 13:35:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:45.696 13:35:11 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:45.696 13:35:11 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:45.696 13:35:11 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:45.696 13:35:11 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:45.696 13:35:11 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:45.696 13:35:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:45.696 13:35:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:45.696 13:35:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.696 13:35:11 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.696 13:35:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:45.696 13:35:12 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:45.696 13:35:12 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5f:00.0 00:04:45.696 13:35:12 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:45.696 13:35:12 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:04:45.696 13:35:12 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:45.696 13:35:12 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:45.696 13:35:12 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:45.696 13:35:12 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5f:00.0 00:04:45.696 13:35:12 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5f:00.0 ]] 00:04:45.696 13:35:12 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2340029 00:04:45.696 13:35:12 -- common/autotest_common.sh@1598 -- # waitforlisten 2340029 00:04:45.696 13:35:12 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.696 13:35:12 -- common/autotest_common.sh@829 -- # '[' -z 2340029 ']' 00:04:45.696 13:35:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.696 13:35:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.696 13:35:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:45.696 13:35:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.696 13:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:45.696 [2024-07-15 13:35:12.147461] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:04:45.696 [2024-07-15 13:35:12.147530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340029 ] 00:04:45.696 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.954 [2024-07-15 13:35:12.233374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.954 [2024-07-15 13:35:12.323412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.519 13:35:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.519 13:35:12 -- common/autotest_common.sh@862 -- # return 0 00:04:46.519 13:35:12 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:46.519 13:35:12 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:46.519 13:35:12 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:04:49.808 nvme0n1 00:04:49.808 13:35:15 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:49.808 [2024-07-15 13:35:16.114275] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:49.808 request: 00:04:49.808 { 00:04:49.808 "nvme_ctrlr_name": "nvme0", 00:04:49.808 "password": "test", 00:04:49.808 "method": "bdev_nvme_opal_revert", 00:04:49.808 "req_id": 1 00:04:49.808 } 00:04:49.808 Got JSON-RPC error response 00:04:49.808 response: 00:04:49.808 { 00:04:49.808 "code": -32602, 00:04:49.808 "message": "Invalid parameters" 00:04:49.808 } 00:04:49.808 13:35:16 -- common/autotest_common.sh@1604 -- # true 00:04:49.808 13:35:16 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:49.808 13:35:16 -- common/autotest_common.sh@1608 -- # killprocess 2340029 00:04:49.808 13:35:16 -- common/autotest_common.sh@948 -- # '[' -z 2340029 ']' 00:04:49.808 13:35:16 -- common/autotest_common.sh@952 -- # kill -0 2340029 00:04:49.808 13:35:16 -- common/autotest_common.sh@953 -- # uname 00:04:49.808 13:35:16 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.808 13:35:16 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2340029 00:04:49.808 13:35:16 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.808 13:35:16 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.808 13:35:16 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2340029' 00:04:49.808 killing process with pid 2340029 00:04:49.808 13:35:16 -- common/autotest_common.sh@967 -- # kill 2340029 00:04:49.808 13:35:16 -- common/autotest_common.sh@972 -- # wait 2340029 00:04:57.927 13:35:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:57.927 13:35:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:57.927 13:35:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:57.927 13:35:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:57.927 13:35:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:57.927 13:35:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.927 13:35:23 -- common/autotest_common.sh@10 -- # set +x 00:04:57.927 
13:35:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:57.927 13:35:23 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:57.927 13:35:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.927 13:35:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.927 13:35:23 -- common/autotest_common.sh@10 -- # set +x 00:04:57.927 ************************************ 00:04:57.927 START TEST env 00:04:57.927 ************************************ 00:04:57.927 13:35:23 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:57.927 * Looking for test storage... 00:04:57.927 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:57.927 13:35:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:57.927 13:35:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.927 13:35:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.927 13:35:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.927 ************************************ 00:04:57.927 START TEST env_memory 00:04:57.927 ************************************ 00:04:57.927 13:35:23 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:57.927 00:04:57.927 00:04:57.927 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.927 http://cunit.sourceforge.net/ 00:04:57.927 00:04:57.927 00:04:57.927 Suite: memory 00:04:57.927 Test: alloc and free memory map ...[2024-07-15 13:35:23.644100] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:57.927 passed 00:04:57.927 Test: mem map translation ...[2024-07-15 13:35:23.662574] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:57.928 [2024-07-15 13:35:23.662594] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:57.928 [2024-07-15 13:35:23.662630] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:57.928 [2024-07-15 13:35:23.662641] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:57.928 passed 00:04:57.928 Test: mem map registration ...[2024-07-15 13:35:23.699182] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:57.928 [2024-07-15 13:35:23.699205] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:57.928 passed 00:04:57.928 Test: mem map adjacent registrations ...passed 00:04:57.928 00:04:57.928 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.928 suites 1 1 n/a 0 0 00:04:57.928 tests 4 4 4 0 0 00:04:57.928 asserts 152 152 152 0 n/a 00:04:57.928 00:04:57.928 Elapsed time = 0.137 seconds 00:04:57.928 00:04:57.928 real 0m0.151s 
00:04:57.928 user 0m0.142s 00:04:57.928 sys 0m0.008s 00:04:57.928 13:35:23 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.928 13:35:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:57.928 ************************************ 00:04:57.928 END TEST env_memory 00:04:57.928 ************************************ 00:04:57.928 13:35:23 env -- common/autotest_common.sh@1142 -- # return 0 00:04:57.928 13:35:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:57.928 13:35:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.928 13:35:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.928 13:35:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.928 ************************************ 00:04:57.928 START TEST env_vtophys 00:04:57.928 ************************************ 00:04:57.928 13:35:23 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:57.928 EAL: lib.eal log level changed from notice to debug 00:04:57.928 EAL: Detected lcore 0 as core 0 on socket 0 00:04:57.928 EAL: Detected lcore 1 as core 1 on socket 0 00:04:57.928 EAL: Detected lcore 2 as core 2 on socket 0 00:04:57.928 EAL: Detected lcore 3 as core 3 on socket 0 00:04:57.928 EAL: Detected lcore 4 as core 4 on socket 0 00:04:57.928 EAL: Detected lcore 5 as core 8 on socket 0 00:04:57.928 EAL: Detected lcore 6 as core 9 on socket 0 00:04:57.928 EAL: Detected lcore 7 as core 10 on socket 0 00:04:57.928 EAL: Detected lcore 8 as core 11 on socket 0 00:04:57.928 EAL: Detected lcore 9 as core 16 on socket 0 00:04:57.928 EAL: Detected lcore 10 as core 17 on socket 0 00:04:57.928 EAL: Detected lcore 11 as core 18 on socket 0 00:04:57.928 EAL: Detected lcore 12 as core 19 on socket 0 00:04:57.928 EAL: Detected lcore 13 as core 20 on socket 0 00:04:57.928 EAL: Detected lcore 14 as core 24 on socket 0 00:04:57.928 EAL: Detected lcore 15 as core 25 on socket 0 00:04:57.928 EAL: Detected lcore 16 as core 26 on socket 0 00:04:57.928 EAL: Detected lcore 17 as core 27 on socket 0 00:04:57.928 EAL: Detected lcore 18 as core 0 on socket 1 00:04:57.928 EAL: Detected lcore 19 as core 1 on socket 1 00:04:57.928 EAL: Detected lcore 20 as core 2 on socket 1 00:04:57.928 EAL: Detected lcore 21 as core 3 on socket 1 00:04:57.928 EAL: Detected lcore 22 as core 4 on socket 1 00:04:57.928 EAL: Detected lcore 23 as core 8 on socket 1 00:04:57.928 EAL: Detected lcore 24 as core 9 on socket 1 00:04:57.928 EAL: Detected lcore 25 as core 10 on socket 1 00:04:57.928 EAL: Detected lcore 26 as core 11 on socket 1 00:04:57.928 EAL: Detected lcore 27 as core 16 on socket 1 00:04:57.928 EAL: Detected lcore 28 as core 17 on socket 1 00:04:57.928 EAL: Detected lcore 29 as core 18 on socket 1 00:04:57.928 EAL: Detected lcore 30 as core 19 on socket 1 00:04:57.928 EAL: Detected lcore 31 as core 20 on socket 1 00:04:57.928 EAL: Detected lcore 32 as core 24 on socket 1 00:04:57.928 EAL: Detected lcore 33 as core 25 on socket 1 00:04:57.928 EAL: Detected lcore 34 as core 26 on socket 1 00:04:57.928 EAL: Detected lcore 35 as core 27 on socket 1 00:04:57.928 EAL: Detected lcore 36 as core 0 on socket 0 00:04:57.928 EAL: Detected lcore 37 as core 1 on socket 0 00:04:57.928 EAL: Detected lcore 38 as core 2 on socket 0 00:04:57.928 EAL: Detected lcore 39 as core 3 on socket 0 00:04:57.928 EAL: Detected lcore 40 as core 4 on socket 0 00:04:57.928 
EAL: Detected lcore 41 as core 8 on socket 0 00:04:57.928 EAL: Detected lcore 42 as core 9 on socket 0 00:04:57.928 EAL: Detected lcore 43 as core 10 on socket 0 00:04:57.928 EAL: Detected lcore 44 as core 11 on socket 0 00:04:57.928 EAL: Detected lcore 45 as core 16 on socket 0 00:04:57.928 EAL: Detected lcore 46 as core 17 on socket 0 00:04:57.928 EAL: Detected lcore 47 as core 18 on socket 0 00:04:57.928 EAL: Detected lcore 48 as core 19 on socket 0 00:04:57.928 EAL: Detected lcore 49 as core 20 on socket 0 00:04:57.928 EAL: Detected lcore 50 as core 24 on socket 0 00:04:57.928 EAL: Detected lcore 51 as core 25 on socket 0 00:04:57.928 EAL: Detected lcore 52 as core 26 on socket 0 00:04:57.928 EAL: Detected lcore 53 as core 27 on socket 0 00:04:57.928 EAL: Detected lcore 54 as core 0 on socket 1 00:04:57.928 EAL: Detected lcore 55 as core 1 on socket 1 00:04:57.928 EAL: Detected lcore 56 as core 2 on socket 1 00:04:57.928 EAL: Detected lcore 57 as core 3 on socket 1 00:04:57.928 EAL: Detected lcore 58 as core 4 on socket 1 00:04:57.928 EAL: Detected lcore 59 as core 8 on socket 1 00:04:57.928 EAL: Detected lcore 60 as core 9 on socket 1 00:04:57.928 EAL: Detected lcore 61 as core 10 on socket 1 00:04:57.928 EAL: Detected lcore 62 as core 11 on socket 1 00:04:57.928 EAL: Detected lcore 63 as core 16 on socket 1 00:04:57.928 EAL: Detected lcore 64 as core 17 on socket 1 00:04:57.928 EAL: Detected lcore 65 as core 18 on socket 1 00:04:57.928 EAL: Detected lcore 66 as core 19 on socket 1 00:04:57.928 EAL: Detected lcore 67 as core 20 on socket 1 00:04:57.928 EAL: Detected lcore 68 as core 24 on socket 1 00:04:57.928 EAL: Detected lcore 69 as core 25 on socket 1 00:04:57.928 EAL: Detected lcore 70 as core 26 on socket 1 00:04:57.928 EAL: Detected lcore 71 as core 27 on socket 1 00:04:57.928 EAL: Maximum logical cores by configuration: 128 00:04:57.928 EAL: Detected CPU lcores: 72 00:04:57.928 EAL: Detected NUMA nodes: 2 00:04:57.928 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:57.928 EAL: Detected shared linkage of DPDK 00:04:57.928 EAL: No shared files mode enabled, IPC will be disabled 00:04:57.928 EAL: Bus pci wants IOVA as 'DC' 00:04:57.928 EAL: Buses did not request a specific IOVA mode. 00:04:57.928 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:57.928 EAL: Selected IOVA mode 'VA' 00:04:57.929 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.929 EAL: Probing VFIO support... 00:04:57.929 EAL: IOMMU type 1 (Type 1) is supported 00:04:57.929 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:57.929 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:57.929 EAL: VFIO support initialized 00:04:57.929 EAL: Ask a virtual area of 0x2e000 bytes 00:04:57.929 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:57.929 EAL: Setting up physically contiguous memory... 
00:04:57.929 EAL: Setting maximum number of open files to 524288 00:04:57.929 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:57.929 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:57.929 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:57.929 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.929 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:57.929 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.929 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.929 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:57.929 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:57.929 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.929 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:57.929 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.929 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.929 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:57.929 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:57.929 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.929 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:57.929 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.929 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.929 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:57.929 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:57.929 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.929 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:57.929 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:57.929 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.929 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:57.929 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:57.929 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:57.929 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.929 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:57.929 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:57.929 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.929 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:57.929 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:57.929 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.929 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:57.929 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:57.929 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.929 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:57.929 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:57.929 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.929 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:57.929 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:57.929 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.929 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:57.929 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:57.929 EAL: Ask a virtual area of 0x61000 bytes 00:04:57.929 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:57.929 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:57.929 EAL: Ask a virtual area of 0x400000000 bytes 00:04:57.929 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:57.929 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:57.929 EAL: Hugepages will be freed exactly as allocated. 00:04:57.929 EAL: No shared files mode enabled, IPC is disabled 00:04:57.929 EAL: No shared files mode enabled, IPC is disabled 00:04:57.929 EAL: TSC frequency is ~2300000 KHz 00:04:57.929 EAL: Main lcore 0 is ready (tid=7f194030ba00;cpuset=[0]) 00:04:57.929 EAL: Trying to obtain current memory policy. 00:04:57.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.929 EAL: Restoring previous memory policy: 0 00:04:57.929 EAL: request: mp_malloc_sync 00:04:57.929 EAL: No shared files mode enabled, IPC is disabled 00:04:57.929 EAL: Heap on socket 0 was expanded by 2MB 00:04:57.929 EAL: No shared files mode enabled, IPC is disabled 00:04:57.929 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:57.929 EAL: Mem event callback 'spdk:(nil)' registered 00:04:57.929 00:04:57.929 00:04:57.929 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.929 http://cunit.sourceforge.net/ 00:04:57.929 00:04:57.929 00:04:57.929 Suite: components_suite 00:04:57.929 Test: vtophys_malloc_test ...passed 00:04:57.929 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:57.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.929 EAL: Restoring previous memory policy: 4 00:04:57.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.929 EAL: request: mp_malloc_sync 00:04:57.929 EAL: No shared files mode enabled, IPC is disabled 00:04:57.929 EAL: Heap on socket 0 was expanded by 4MB 00:04:57.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.929 EAL: request: mp_malloc_sync 00:04:57.929 EAL: No shared files mode enabled, IPC is disabled 00:04:57.929 EAL: Heap on socket 0 was shrunk by 4MB 00:04:57.929 EAL: Trying to obtain current memory policy. 00:04:57.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.929 EAL: Restoring previous memory policy: 4 00:04:57.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.929 EAL: request: mp_malloc_sync 00:04:57.929 EAL: No shared files mode enabled, IPC is disabled 00:04:57.929 EAL: Heap on socket 0 was expanded by 6MB 00:04:57.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.929 EAL: request: mp_malloc_sync 00:04:57.929 EAL: No shared files mode enabled, IPC is disabled 00:04:57.929 EAL: Heap on socket 0 was shrunk by 6MB 00:04:57.929 EAL: Trying to obtain current memory policy. 00:04:57.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.929 EAL: Restoring previous memory policy: 4 00:04:57.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.929 EAL: request: mp_malloc_sync 00:04:57.929 EAL: No shared files mode enabled, IPC is disabled 00:04:57.929 EAL: Heap on socket 0 was expanded by 10MB 00:04:57.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.929 EAL: request: mp_malloc_sync 00:04:57.929 EAL: No shared files mode enabled, IPC is disabled 00:04:57.929 EAL: Heap on socket 0 was shrunk by 10MB 00:04:57.929 EAL: Trying to obtain current memory policy. 
00:04:57.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.929 EAL: Restoring previous memory policy: 4 00:04:57.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was expanded by 18MB 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was shrunk by 18MB 00:04:57.930 EAL: Trying to obtain current memory policy. 00:04:57.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.930 EAL: Restoring previous memory policy: 4 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was expanded by 34MB 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was shrunk by 34MB 00:04:57.930 EAL: Trying to obtain current memory policy. 00:04:57.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.930 EAL: Restoring previous memory policy: 4 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was expanded by 66MB 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was shrunk by 66MB 00:04:57.930 EAL: Trying to obtain current memory policy. 00:04:57.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.930 EAL: Restoring previous memory policy: 4 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was expanded by 130MB 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was shrunk by 130MB 00:04:57.930 EAL: Trying to obtain current memory policy. 00:04:57.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.930 EAL: Restoring previous memory policy: 4 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was expanded by 258MB 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was shrunk by 258MB 00:04:57.930 EAL: Trying to obtain current memory policy. 
00:04:57.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.930 EAL: Restoring previous memory policy: 4 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.930 EAL: request: mp_malloc_sync 00:04:57.930 EAL: No shared files mode enabled, IPC is disabled 00:04:57.930 EAL: Heap on socket 0 was expanded by 514MB 00:04:57.930 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.189 EAL: request: mp_malloc_sync 00:04:58.189 EAL: No shared files mode enabled, IPC is disabled 00:04:58.189 EAL: Heap on socket 0 was shrunk by 514MB 00:04:58.189 EAL: Trying to obtain current memory policy. 00:04:58.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.448 EAL: Restoring previous memory policy: 4 00:04:58.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.448 EAL: request: mp_malloc_sync 00:04:58.448 EAL: No shared files mode enabled, IPC is disabled 00:04:58.448 EAL: Heap on socket 0 was expanded by 1026MB 00:04:58.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.707 EAL: request: mp_malloc_sync 00:04:58.707 EAL: No shared files mode enabled, IPC is disabled 00:04:58.707 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:58.707 passed 00:04:58.707 00:04:58.707 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.707 suites 1 1 n/a 0 0 00:04:58.707 tests 2 2 2 0 0 00:04:58.707 asserts 497 497 497 0 n/a 00:04:58.707 00:04:58.707 Elapsed time = 1.125 seconds 00:04:58.707 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.707 EAL: request: mp_malloc_sync 00:04:58.707 EAL: No shared files mode enabled, IPC is disabled 00:04:58.707 EAL: Heap on socket 0 was shrunk by 2MB 00:04:58.707 EAL: No shared files mode enabled, IPC is disabled 00:04:58.707 EAL: No shared files mode enabled, IPC is disabled 00:04:58.707 EAL: No shared files mode enabled, IPC is disabled 00:04:58.707 00:04:58.707 real 0m1.279s 00:04:58.707 user 0m0.741s 00:04:58.707 sys 0m0.502s 00:04:58.707 13:35:25 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.707 13:35:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:58.707 ************************************ 00:04:58.707 END TEST env_vtophys 00:04:58.707 ************************************ 00:04:58.707 13:35:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.707 13:35:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:58.707 13:35:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.707 13:35:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.707 13:35:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.707 ************************************ 00:04:58.707 START TEST env_pci 00:04:58.707 ************************************ 00:04:58.707 13:35:25 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:58.707 00:04:58.707 00:04:58.708 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.708 http://cunit.sourceforge.net/ 00:04:58.708 00:04:58.708 00:04:58.708 Suite: pci 00:04:58.708 Test: pci_hook ...[2024-07-15 13:35:25.221225] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2341775 has claimed it 00:04:58.967 EAL: Cannot find device (10000:00:01.0) 00:04:58.967 EAL: Failed to attach device on primary process 00:04:58.967 passed 00:04:58.967 00:04:58.967 
Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.967 suites 1 1 n/a 0 0 00:04:58.967 tests 1 1 1 0 0 00:04:58.967 asserts 25 25 25 0 n/a 00:04:58.967 00:04:58.967 Elapsed time = 0.032 seconds 00:04:58.967 00:04:58.967 real 0m0.054s 00:04:58.967 user 0m0.013s 00:04:58.967 sys 0m0.041s 00:04:58.967 13:35:25 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.967 13:35:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:58.967 ************************************ 00:04:58.967 END TEST env_pci 00:04:58.967 ************************************ 00:04:58.967 13:35:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:58.967 13:35:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:58.967 13:35:25 env -- env/env.sh@15 -- # uname 00:04:58.967 13:35:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:58.967 13:35:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:58.967 13:35:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:58.967 13:35:25 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:58.967 13:35:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.967 13:35:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.967 ************************************ 00:04:58.967 START TEST env_dpdk_post_init 00:04:58.967 ************************************ 00:04:58.967 13:35:25 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:58.967 EAL: Detected CPU lcores: 72 00:04:58.967 EAL: Detected NUMA nodes: 2 00:04:58.967 EAL: Detected shared linkage of DPDK 00:04:58.967 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:58.967 EAL: Selected IOVA mode 'VA' 00:04:58.967 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.967 EAL: VFIO support initialized 00:04:58.967 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.259 EAL: Using IOMMU type 1 (Type 1) 00:04:59.259 EAL: Ignore mapping IO port bar(1) 00:04:59.259 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:59.259 EAL: Ignore mapping IO port bar(1) 00:04:59.259 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:59.259 EAL: Ignore mapping IO port bar(1) 00:04:59.259 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:59.259 EAL: Ignore mapping IO port bar(1) 00:04:59.259 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:59.259 EAL: Ignore mapping IO port bar(1) 00:04:59.259 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:59.259 EAL: Ignore mapping IO port bar(1) 00:04:59.259 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:59.259 EAL: Ignore mapping IO port bar(1) 00:04:59.259 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:59.259 EAL: Ignore mapping IO port bar(1) 00:04:59.259 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:59.858 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:04:59.858 EAL: Ignore mapping IO port bar(1) 00:04:59.858 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:59.858 EAL: 
Ignore mapping IO port bar(1) 00:04:59.858 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:59.858 EAL: Ignore mapping IO port bar(1) 00:04:59.858 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:00.117 EAL: Ignore mapping IO port bar(1) 00:05:00.117 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:00.117 EAL: Ignore mapping IO port bar(1) 00:05:00.117 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:00.117 EAL: Ignore mapping IO port bar(1) 00:05:00.117 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:00.117 EAL: Ignore mapping IO port bar(1) 00:05:00.117 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:00.117 EAL: Ignore mapping IO port bar(1) 00:05:00.117 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:10.097 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:05:10.097 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:05:10.097 Starting DPDK initialization... 00:05:10.097 Starting SPDK post initialization... 00:05:10.097 SPDK NVMe probe 00:05:10.097 Attaching to 0000:5f:00.0 00:05:10.097 Attached to 0000:5f:00.0 00:05:10.097 Cleaning up... 00:05:10.097 00:05:10.097 real 0m9.984s 00:05:10.097 user 0m7.772s 00:05:10.097 sys 0m1.259s 00:05:10.097 13:35:35 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.097 13:35:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.097 ************************************ 00:05:10.097 END TEST env_dpdk_post_init 00:05:10.097 ************************************ 00:05:10.097 13:35:35 env -- common/autotest_common.sh@1142 -- # return 0 00:05:10.097 13:35:35 env -- env/env.sh@26 -- # uname 00:05:10.097 13:35:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:10.097 13:35:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.097 13:35:35 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.097 13:35:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.097 13:35:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.097 ************************************ 00:05:10.097 START TEST env_mem_callbacks 00:05:10.097 ************************************ 00:05:10.097 13:35:35 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.097 EAL: Detected CPU lcores: 72 00:05:10.097 EAL: Detected NUMA nodes: 2 00:05:10.097 EAL: Detected shared linkage of DPDK 00:05:10.097 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.097 EAL: Selected IOVA mode 'VA' 00:05:10.097 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.097 EAL: VFIO support initialized 00:05:10.097 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.097 00:05:10.097 00:05:10.097 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.097 http://cunit.sourceforge.net/ 00:05:10.097 00:05:10.097 00:05:10.097 Suite: memory 00:05:10.097 Test: test ... 
00:05:10.097 register 0x200000200000 2097152 00:05:10.097 malloc 3145728 00:05:10.097 register 0x200000400000 4194304 00:05:10.097 buf 0x200000500000 len 3145728 PASSED 00:05:10.097 malloc 64 00:05:10.097 buf 0x2000004fff40 len 64 PASSED 00:05:10.097 malloc 4194304 00:05:10.097 register 0x200000800000 6291456 00:05:10.097 buf 0x200000a00000 len 4194304 PASSED 00:05:10.097 free 0x200000500000 3145728 00:05:10.097 free 0x2000004fff40 64 00:05:10.097 unregister 0x200000400000 4194304 PASSED 00:05:10.097 free 0x200000a00000 4194304 00:05:10.097 unregister 0x200000800000 6291456 PASSED 00:05:10.097 malloc 8388608 00:05:10.097 register 0x200000400000 10485760 00:05:10.097 buf 0x200000600000 len 8388608 PASSED 00:05:10.097 free 0x200000600000 8388608 00:05:10.097 unregister 0x200000400000 10485760 PASSED 00:05:10.097 passed 00:05:10.097 00:05:10.097 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.097 suites 1 1 n/a 0 0 00:05:10.097 tests 1 1 1 0 0 00:05:10.097 asserts 15 15 15 0 n/a 00:05:10.097 00:05:10.097 Elapsed time = 0.008 seconds 00:05:10.097 00:05:10.097 real 0m0.072s 00:05:10.097 user 0m0.022s 00:05:10.097 sys 0m0.049s 00:05:10.097 13:35:35 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.097 13:35:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:10.097 ************************************ 00:05:10.097 END TEST env_mem_callbacks 00:05:10.097 ************************************ 00:05:10.097 13:35:35 env -- common/autotest_common.sh@1142 -- # return 0 00:05:10.097 00:05:10.097 real 0m12.085s 00:05:10.097 user 0m8.891s 00:05:10.097 sys 0m2.245s 00:05:10.097 13:35:35 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.097 13:35:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.097 ************************************ 00:05:10.097 END TEST env 00:05:10.097 ************************************ 00:05:10.097 13:35:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:10.097 13:35:35 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.097 13:35:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.097 13:35:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.097 13:35:35 -- common/autotest_common.sh@10 -- # set +x 00:05:10.097 ************************************ 00:05:10.097 START TEST rpc 00:05:10.097 ************************************ 00:05:10.097 13:35:35 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.097 * Looking for test storage... 00:05:10.097 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:10.097 13:35:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2343361 00:05:10.097 13:35:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.097 13:35:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:10.097 13:35:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2343361 00:05:10.097 13:35:35 rpc -- common/autotest_common.sh@829 -- # '[' -z 2343361 ']' 00:05:10.097 13:35:35 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.097 13:35:35 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.097 13:35:35 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:10.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.097 13:35:35 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.097 13:35:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.097 [2024-07-15 13:35:35.785013] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:10.097 [2024-07-15 13:35:35.785080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343361 ] 00:05:10.097 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.097 [2024-07-15 13:35:35.871346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.097 [2024-07-15 13:35:35.962624] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:10.097 [2024-07-15 13:35:35.962673] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2343361' to capture a snapshot of events at runtime. 00:05:10.097 [2024-07-15 13:35:35.962682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:10.097 [2024-07-15 13:35:35.962690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:10.097 [2024-07-15 13:35:35.962697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2343361 for offline analysis/debug. 00:05:10.097 [2024-07-15 13:35:35.962723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.097 13:35:36 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.097 13:35:36 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:10.097 13:35:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:10.097 13:35:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:10.097 13:35:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:10.097 13:35:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:10.097 13:35:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.097 13:35:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.097 13:35:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.357 ************************************ 00:05:10.357 START TEST rpc_integrity 00:05:10.357 ************************************ 00:05:10.357 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:10.357 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.357 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.357 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.357 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.357 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.357 13:35:36 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.357 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.357 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.357 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.357 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.357 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.357 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:10.357 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.357 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.357 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.357 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.357 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.357 { 00:05:10.357 "name": "Malloc0", 00:05:10.357 "aliases": [ 00:05:10.357 "43132585-461a-4e8a-9a35-ebce237e13d5" 00:05:10.357 ], 00:05:10.357 "product_name": "Malloc disk", 00:05:10.357 "block_size": 512, 00:05:10.357 "num_blocks": 16384, 00:05:10.357 "uuid": "43132585-461a-4e8a-9a35-ebce237e13d5", 00:05:10.357 "assigned_rate_limits": { 00:05:10.357 "rw_ios_per_sec": 0, 00:05:10.357 "rw_mbytes_per_sec": 0, 00:05:10.357 "r_mbytes_per_sec": 0, 00:05:10.357 "w_mbytes_per_sec": 0 00:05:10.357 }, 00:05:10.357 "claimed": false, 00:05:10.357 "zoned": false, 00:05:10.357 "supported_io_types": { 00:05:10.357 "read": true, 00:05:10.357 "write": true, 00:05:10.357 "unmap": true, 00:05:10.357 "flush": true, 00:05:10.357 "reset": true, 00:05:10.357 "nvme_admin": false, 00:05:10.357 "nvme_io": false, 00:05:10.357 "nvme_io_md": false, 00:05:10.357 "write_zeroes": true, 00:05:10.357 "zcopy": true, 00:05:10.357 "get_zone_info": false, 00:05:10.357 "zone_management": false, 00:05:10.357 "zone_append": false, 00:05:10.357 "compare": false, 00:05:10.357 "compare_and_write": false, 00:05:10.358 "abort": true, 00:05:10.358 "seek_hole": false, 00:05:10.358 "seek_data": false, 00:05:10.358 "copy": true, 00:05:10.358 "nvme_iov_md": false 00:05:10.358 }, 00:05:10.358 "memory_domains": [ 00:05:10.358 { 00:05:10.358 "dma_device_id": "system", 00:05:10.358 "dma_device_type": 1 00:05:10.358 }, 00:05:10.358 { 00:05:10.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.358 "dma_device_type": 2 00:05:10.358 } 00:05:10.358 ], 00:05:10.358 "driver_specific": {} 00:05:10.358 } 00:05:10.358 ]' 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.358 [2024-07-15 13:35:36.766440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:10.358 [2024-07-15 13:35:36.766474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.358 [2024-07-15 13:35:36.766490] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b58480 00:05:10.358 [2024-07-15 13:35:36.766499] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.358 [2024-07-15 13:35:36.767558] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.358 [2024-07-15 13:35:36.767591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.358 Passthru0 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.358 { 00:05:10.358 "name": "Malloc0", 00:05:10.358 "aliases": [ 00:05:10.358 "43132585-461a-4e8a-9a35-ebce237e13d5" 00:05:10.358 ], 00:05:10.358 "product_name": "Malloc disk", 00:05:10.358 "block_size": 512, 00:05:10.358 "num_blocks": 16384, 00:05:10.358 "uuid": "43132585-461a-4e8a-9a35-ebce237e13d5", 00:05:10.358 "assigned_rate_limits": { 00:05:10.358 "rw_ios_per_sec": 0, 00:05:10.358 "rw_mbytes_per_sec": 0, 00:05:10.358 "r_mbytes_per_sec": 0, 00:05:10.358 "w_mbytes_per_sec": 0 00:05:10.358 }, 00:05:10.358 "claimed": true, 00:05:10.358 "claim_type": "exclusive_write", 00:05:10.358 "zoned": false, 00:05:10.358 "supported_io_types": { 00:05:10.358 "read": true, 00:05:10.358 "write": true, 00:05:10.358 "unmap": true, 00:05:10.358 "flush": true, 00:05:10.358 "reset": true, 00:05:10.358 "nvme_admin": false, 00:05:10.358 "nvme_io": false, 00:05:10.358 "nvme_io_md": false, 00:05:10.358 "write_zeroes": true, 00:05:10.358 "zcopy": true, 00:05:10.358 "get_zone_info": false, 00:05:10.358 "zone_management": false, 00:05:10.358 "zone_append": false, 00:05:10.358 "compare": false, 00:05:10.358 "compare_and_write": false, 00:05:10.358 "abort": true, 00:05:10.358 "seek_hole": false, 00:05:10.358 "seek_data": false, 00:05:10.358 "copy": true, 00:05:10.358 "nvme_iov_md": false 00:05:10.358 }, 00:05:10.358 "memory_domains": [ 00:05:10.358 { 00:05:10.358 "dma_device_id": "system", 00:05:10.358 "dma_device_type": 1 00:05:10.358 }, 00:05:10.358 { 00:05:10.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.358 "dma_device_type": 2 00:05:10.358 } 00:05:10.358 ], 00:05:10.358 "driver_specific": {} 00:05:10.358 }, 00:05:10.358 { 00:05:10.358 "name": "Passthru0", 00:05:10.358 "aliases": [ 00:05:10.358 "b7cca107-2493-5102-afe1-8a880fec9cfa" 00:05:10.358 ], 00:05:10.358 "product_name": "passthru", 00:05:10.358 "block_size": 512, 00:05:10.358 "num_blocks": 16384, 00:05:10.358 "uuid": "b7cca107-2493-5102-afe1-8a880fec9cfa", 00:05:10.358 "assigned_rate_limits": { 00:05:10.358 "rw_ios_per_sec": 0, 00:05:10.358 "rw_mbytes_per_sec": 0, 00:05:10.358 "r_mbytes_per_sec": 0, 00:05:10.358 "w_mbytes_per_sec": 0 00:05:10.358 }, 00:05:10.358 "claimed": false, 00:05:10.358 "zoned": false, 00:05:10.358 "supported_io_types": { 00:05:10.358 "read": true, 00:05:10.358 "write": true, 00:05:10.358 "unmap": true, 00:05:10.358 "flush": true, 00:05:10.358 "reset": true, 00:05:10.358 "nvme_admin": false, 00:05:10.358 "nvme_io": false, 00:05:10.358 "nvme_io_md": false, 00:05:10.358 "write_zeroes": true, 00:05:10.358 "zcopy": true, 00:05:10.358 "get_zone_info": false, 00:05:10.358 "zone_management": false, 00:05:10.358 "zone_append": false, 00:05:10.358 "compare": false, 00:05:10.358 "compare_and_write": false, 00:05:10.358 "abort": true, 00:05:10.358 "seek_hole": false, 00:05:10.358 "seek_data": 
false, 00:05:10.358 "copy": true, 00:05:10.358 "nvme_iov_md": false 00:05:10.358 }, 00:05:10.358 "memory_domains": [ 00:05:10.358 { 00:05:10.358 "dma_device_id": "system", 00:05:10.358 "dma_device_type": 1 00:05:10.358 }, 00:05:10.358 { 00:05:10.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.358 "dma_device_type": 2 00:05:10.358 } 00:05:10.358 ], 00:05:10.358 "driver_specific": { 00:05:10.358 "passthru": { 00:05:10.358 "name": "Passthru0", 00:05:10.358 "base_bdev_name": "Malloc0" 00:05:10.358 } 00:05:10.358 } 00:05:10.358 } 00:05:10.358 ]' 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.358 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.358 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.617 13:35:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.617 00:05:10.617 real 0m0.284s 00:05:10.617 user 0m0.164s 00:05:10.617 sys 0m0.055s 00:05:10.617 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.617 13:35:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.617 ************************************ 00:05:10.617 END TEST rpc_integrity 00:05:10.617 ************************************ 00:05:10.617 13:35:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.617 13:35:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:10.617 13:35:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.617 13:35:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.617 13:35:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.617 ************************************ 00:05:10.617 START TEST rpc_plugins 00:05:10.617 ************************************ 00:05:10.617 13:35:36 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:10.617 13:35:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:10.617 13:35:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.617 13:35:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.617 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.617 13:35:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:10.617 13:35:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 
00:05:10.617 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.617 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.617 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.617 13:35:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:10.617 { 00:05:10.617 "name": "Malloc1", 00:05:10.617 "aliases": [ 00:05:10.617 "d03b6ff6-ae46-4496-9019-801416b6980d" 00:05:10.617 ], 00:05:10.618 "product_name": "Malloc disk", 00:05:10.618 "block_size": 4096, 00:05:10.618 "num_blocks": 256, 00:05:10.618 "uuid": "d03b6ff6-ae46-4496-9019-801416b6980d", 00:05:10.618 "assigned_rate_limits": { 00:05:10.618 "rw_ios_per_sec": 0, 00:05:10.618 "rw_mbytes_per_sec": 0, 00:05:10.618 "r_mbytes_per_sec": 0, 00:05:10.618 "w_mbytes_per_sec": 0 00:05:10.618 }, 00:05:10.618 "claimed": false, 00:05:10.618 "zoned": false, 00:05:10.618 "supported_io_types": { 00:05:10.618 "read": true, 00:05:10.618 "write": true, 00:05:10.618 "unmap": true, 00:05:10.618 "flush": true, 00:05:10.618 "reset": true, 00:05:10.618 "nvme_admin": false, 00:05:10.618 "nvme_io": false, 00:05:10.618 "nvme_io_md": false, 00:05:10.618 "write_zeroes": true, 00:05:10.618 "zcopy": true, 00:05:10.618 "get_zone_info": false, 00:05:10.618 "zone_management": false, 00:05:10.618 "zone_append": false, 00:05:10.618 "compare": false, 00:05:10.618 "compare_and_write": false, 00:05:10.618 "abort": true, 00:05:10.618 "seek_hole": false, 00:05:10.618 "seek_data": false, 00:05:10.618 "copy": true, 00:05:10.618 "nvme_iov_md": false 00:05:10.618 }, 00:05:10.618 "memory_domains": [ 00:05:10.618 { 00:05:10.618 "dma_device_id": "system", 00:05:10.618 "dma_device_type": 1 00:05:10.618 }, 00:05:10.618 { 00:05:10.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.618 "dma_device_type": 2 00:05:10.618 } 00:05:10.618 ], 00:05:10.618 "driver_specific": {} 00:05:10.618 } 00:05:10.618 ]' 00:05:10.618 13:35:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:10.618 13:35:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:10.618 13:35:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:10.618 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.618 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.618 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.618 13:35:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:10.618 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.618 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.618 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.618 13:35:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:10.618 13:35:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:10.618 13:35:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:10.618 00:05:10.618 real 0m0.144s 00:05:10.618 user 0m0.086s 00:05:10.618 sys 0m0.026s 00:05:10.618 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.618 13:35:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.618 ************************************ 00:05:10.618 END TEST rpc_plugins 00:05:10.618 ************************************ 00:05:10.876 13:35:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.876 13:35:37 rpc -- rpc/rpc.sh@75 -- # run_test 
rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.876 13:35:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.877 13:35:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.877 13:35:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.877 ************************************ 00:05:10.877 START TEST rpc_trace_cmd_test 00:05:10.877 ************************************ 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:10.877 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2343361", 00:05:10.877 "tpoint_group_mask": "0x8", 00:05:10.877 "iscsi_conn": { 00:05:10.877 "mask": "0x2", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "scsi": { 00:05:10.877 "mask": "0x4", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "bdev": { 00:05:10.877 "mask": "0x8", 00:05:10.877 "tpoint_mask": "0xffffffffffffffff" 00:05:10.877 }, 00:05:10.877 "nvmf_rdma": { 00:05:10.877 "mask": "0x10", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "nvmf_tcp": { 00:05:10.877 "mask": "0x20", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "ftl": { 00:05:10.877 "mask": "0x40", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "blobfs": { 00:05:10.877 "mask": "0x80", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "dsa": { 00:05:10.877 "mask": "0x200", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "thread": { 00:05:10.877 "mask": "0x400", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "nvme_pcie": { 00:05:10.877 "mask": "0x800", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "iaa": { 00:05:10.877 "mask": "0x1000", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "nvme_tcp": { 00:05:10.877 "mask": "0x2000", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "bdev_nvme": { 00:05:10.877 "mask": "0x4000", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 }, 00:05:10.877 "sock": { 00:05:10.877 "mask": "0x8000", 00:05:10.877 "tpoint_mask": "0x0" 00:05:10.877 } 00:05:10.877 }' 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:10.877 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:11.136 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:11.136 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:11.136 13:35:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:11.136 
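The rpc_trace_cmd_test block above drives the tracing RPC through the harness's rpc_cmd wrapper and validates the result with jq. A minimal manual sketch of the same checks, assuming a running spdk_tgt on the default /var/tmp/spdk.sock socket and the in-tree scripts/rpc.py client (the rpc.py path is an assumption; the RPC name and jq filters come straight from the log):

  # same call as 'rpc_cmd trace_get_info' in rpc.sh@42
  ./scripts/rpc.py trace_get_info > trace_info.json
  jq length trace_info.json                       # rpc.sh@43 only requires this to be > 2 (16 groups here)
  jq 'has("tpoint_group_mask")' trace_info.json   # true, per rpc.sh@44
  jq 'has("tpoint_shm_path")' trace_info.json     # true, per rpc.sh@45
  jq -r .bdev.tpoint_mask trace_info.json         # 0xffffffffffffffff: the bdev tpoint group was enabled for this run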
00:05:11.136 real 0m0.215s 00:05:11.136 user 0m0.173s 00:05:11.136 sys 0m0.036s 00:05:11.136 13:35:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.136 13:35:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.136 ************************************ 00:05:11.136 END TEST rpc_trace_cmd_test 00:05:11.136 ************************************ 00:05:11.136 13:35:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.136 13:35:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:11.136 13:35:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:11.136 13:35:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:11.136 13:35:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.136 13:35:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.136 13:35:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.136 ************************************ 00:05:11.136 START TEST rpc_daemon_integrity 00:05:11.136 ************************************ 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.136 { 00:05:11.136 "name": "Malloc2", 00:05:11.136 "aliases": [ 00:05:11.136 "22ae242a-2c57-4482-b0d4-9ab8ae8fe7a9" 00:05:11.136 ], 00:05:11.136 "product_name": "Malloc disk", 00:05:11.136 "block_size": 512, 00:05:11.136 "num_blocks": 16384, 00:05:11.136 "uuid": "22ae242a-2c57-4482-b0d4-9ab8ae8fe7a9", 00:05:11.136 "assigned_rate_limits": { 00:05:11.136 "rw_ios_per_sec": 0, 00:05:11.136 "rw_mbytes_per_sec": 0, 00:05:11.136 "r_mbytes_per_sec": 0, 00:05:11.136 "w_mbytes_per_sec": 0 00:05:11.136 }, 00:05:11.136 "claimed": false, 00:05:11.136 "zoned": false, 00:05:11.136 "supported_io_types": { 00:05:11.136 "read": true, 00:05:11.136 "write": true, 00:05:11.136 "unmap": true, 00:05:11.136 "flush": true, 00:05:11.136 "reset": true, 00:05:11.136 "nvme_admin": false, 00:05:11.136 "nvme_io": false, 00:05:11.136 
"nvme_io_md": false, 00:05:11.136 "write_zeroes": true, 00:05:11.136 "zcopy": true, 00:05:11.136 "get_zone_info": false, 00:05:11.136 "zone_management": false, 00:05:11.136 "zone_append": false, 00:05:11.136 "compare": false, 00:05:11.136 "compare_and_write": false, 00:05:11.136 "abort": true, 00:05:11.136 "seek_hole": false, 00:05:11.136 "seek_data": false, 00:05:11.136 "copy": true, 00:05:11.136 "nvme_iov_md": false 00:05:11.136 }, 00:05:11.136 "memory_domains": [ 00:05:11.136 { 00:05:11.136 "dma_device_id": "system", 00:05:11.136 "dma_device_type": 1 00:05:11.136 }, 00:05:11.136 { 00:05:11.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.136 "dma_device_type": 2 00:05:11.136 } 00:05:11.136 ], 00:05:11.136 "driver_specific": {} 00:05:11.136 } 00:05:11.136 ]' 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.136 [2024-07-15 13:35:37.652863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:11.136 [2024-07-15 13:35:37.652896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.136 [2024-07-15 13:35:37.652912] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b5c850 00:05:11.136 [2024-07-15 13:35:37.652921] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.136 [2024-07-15 13:35:37.653887] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.136 [2024-07-15 13:35:37.653908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.136 Passthru0 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.136 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.396 { 00:05:11.396 "name": "Malloc2", 00:05:11.396 "aliases": [ 00:05:11.396 "22ae242a-2c57-4482-b0d4-9ab8ae8fe7a9" 00:05:11.396 ], 00:05:11.396 "product_name": "Malloc disk", 00:05:11.396 "block_size": 512, 00:05:11.396 "num_blocks": 16384, 00:05:11.396 "uuid": "22ae242a-2c57-4482-b0d4-9ab8ae8fe7a9", 00:05:11.396 "assigned_rate_limits": { 00:05:11.396 "rw_ios_per_sec": 0, 00:05:11.396 "rw_mbytes_per_sec": 0, 00:05:11.396 "r_mbytes_per_sec": 0, 00:05:11.396 "w_mbytes_per_sec": 0 00:05:11.396 }, 00:05:11.396 "claimed": true, 00:05:11.396 "claim_type": "exclusive_write", 00:05:11.396 "zoned": false, 00:05:11.396 "supported_io_types": { 00:05:11.396 "read": true, 00:05:11.396 "write": true, 00:05:11.396 "unmap": true, 00:05:11.396 "flush": true, 00:05:11.396 "reset": true, 00:05:11.396 "nvme_admin": false, 00:05:11.396 "nvme_io": false, 00:05:11.396 "nvme_io_md": false, 00:05:11.396 "write_zeroes": true, 00:05:11.396 "zcopy": true, 00:05:11.396 "get_zone_info": false, 
00:05:11.396 "zone_management": false, 00:05:11.396 "zone_append": false, 00:05:11.396 "compare": false, 00:05:11.396 "compare_and_write": false, 00:05:11.396 "abort": true, 00:05:11.396 "seek_hole": false, 00:05:11.396 "seek_data": false, 00:05:11.396 "copy": true, 00:05:11.396 "nvme_iov_md": false 00:05:11.396 }, 00:05:11.396 "memory_domains": [ 00:05:11.396 { 00:05:11.396 "dma_device_id": "system", 00:05:11.396 "dma_device_type": 1 00:05:11.396 }, 00:05:11.396 { 00:05:11.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.396 "dma_device_type": 2 00:05:11.396 } 00:05:11.396 ], 00:05:11.396 "driver_specific": {} 00:05:11.396 }, 00:05:11.396 { 00:05:11.396 "name": "Passthru0", 00:05:11.396 "aliases": [ 00:05:11.396 "61c2b473-1d24-5aa6-97c5-145ef48b661d" 00:05:11.396 ], 00:05:11.396 "product_name": "passthru", 00:05:11.396 "block_size": 512, 00:05:11.396 "num_blocks": 16384, 00:05:11.396 "uuid": "61c2b473-1d24-5aa6-97c5-145ef48b661d", 00:05:11.396 "assigned_rate_limits": { 00:05:11.396 "rw_ios_per_sec": 0, 00:05:11.396 "rw_mbytes_per_sec": 0, 00:05:11.396 "r_mbytes_per_sec": 0, 00:05:11.396 "w_mbytes_per_sec": 0 00:05:11.396 }, 00:05:11.396 "claimed": false, 00:05:11.396 "zoned": false, 00:05:11.396 "supported_io_types": { 00:05:11.396 "read": true, 00:05:11.396 "write": true, 00:05:11.396 "unmap": true, 00:05:11.396 "flush": true, 00:05:11.396 "reset": true, 00:05:11.396 "nvme_admin": false, 00:05:11.396 "nvme_io": false, 00:05:11.396 "nvme_io_md": false, 00:05:11.396 "write_zeroes": true, 00:05:11.396 "zcopy": true, 00:05:11.396 "get_zone_info": false, 00:05:11.396 "zone_management": false, 00:05:11.396 "zone_append": false, 00:05:11.396 "compare": false, 00:05:11.396 "compare_and_write": false, 00:05:11.396 "abort": true, 00:05:11.396 "seek_hole": false, 00:05:11.396 "seek_data": false, 00:05:11.396 "copy": true, 00:05:11.396 "nvme_iov_md": false 00:05:11.396 }, 00:05:11.396 "memory_domains": [ 00:05:11.396 { 00:05:11.396 "dma_device_id": "system", 00:05:11.396 "dma_device_type": 1 00:05:11.396 }, 00:05:11.396 { 00:05:11.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.396 "dma_device_type": 2 00:05:11.396 } 00:05:11.396 ], 00:05:11.396 "driver_specific": { 00:05:11.396 "passthru": { 00:05:11.396 "name": "Passthru0", 00:05:11.396 "base_bdev_name": "Malloc2" 00:05:11.396 } 00:05:11.396 } 00:05:11.396 } 00:05:11.396 ]' 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.396 13:35:37 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.396 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.397 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.397 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.397 13:35:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.397 00:05:11.397 real 0m0.294s 00:05:11.397 user 0m0.171s 00:05:11.397 sys 0m0.058s 00:05:11.397 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.397 13:35:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.397 ************************************ 00:05:11.397 END TEST rpc_daemon_integrity 00:05:11.397 ************************************ 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.397 13:35:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:11.397 13:35:37 rpc -- rpc/rpc.sh@84 -- # killprocess 2343361 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@948 -- # '[' -z 2343361 ']' 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@952 -- # kill -0 2343361 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@953 -- # uname 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2343361 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2343361' 00:05:11.397 killing process with pid 2343361 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@967 -- # kill 2343361 00:05:11.397 13:35:37 rpc -- common/autotest_common.sh@972 -- # wait 2343361 00:05:11.970 00:05:11.970 real 0m2.633s 00:05:11.970 user 0m3.280s 00:05:11.970 sys 0m0.854s 00:05:11.970 13:35:38 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.970 13:35:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.970 ************************************ 00:05:11.970 END TEST rpc 00:05:11.970 ************************************ 00:05:11.970 13:35:38 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.970 13:35:38 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:11.970 13:35:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.970 13:35:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.970 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:05:11.970 ************************************ 00:05:11.970 START TEST skip_rpc 00:05:11.970 ************************************ 00:05:11.970 13:35:38 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:11.970 * Looking for test storage... 
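Both rpc_integrity and the rpc_daemon_integrity run that ends above follow the same malloc-plus-passthru round trip. A condensed sketch of that flow outside the harness, assuming a running spdk_tgt and the in-tree scripts/rpc.py client (every RPC name and argument below is taken from the log; Malloc0 is the auto-assigned name seen in the first run):

  ./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB at 512 B blocks -> 16384 blocks, named Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0 (claim_type exclusive_write in the dump above)
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # 2, as rpc.sh@21 asserts
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # back to 0, per rpc.sh@26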
00:05:11.970 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:11.970 13:35:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:11.970 13:35:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:11.970 13:35:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:11.970 13:35:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.970 13:35:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.970 13:35:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.970 ************************************ 00:05:11.970 START TEST skip_rpc 00:05:11.970 ************************************ 00:05:11.970 13:35:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:11.970 13:35:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2343901 00:05:11.970 13:35:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.970 13:35:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:11.970 13:35:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:12.229 [2024-07-15 13:35:38.543902] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:12.229 [2024-07-15 13:35:38.543952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343901 ] 00:05:12.229 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.229 [2024-07-15 13:35:38.627630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.229 [2024-07-15 13:35:38.709343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 
-- # trap - SIGINT SIGTERM EXIT 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2343901 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2343901 ']' 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2343901 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2343901 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2343901' 00:05:17.498 killing process with pid 2343901 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2343901 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2343901 00:05:17.498 00:05:17.498 real 0m5.421s 00:05:17.498 user 0m5.145s 00:05:17.498 sys 0m0.310s 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.498 13:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.498 ************************************ 00:05:17.498 END TEST skip_rpc 00:05:17.498 ************************************ 00:05:17.498 13:35:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.498 13:35:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:17.498 13:35:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.498 13:35:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.498 13:35:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.498 ************************************ 00:05:17.498 START TEST skip_rpc_with_json 00:05:17.498 ************************************ 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2344657 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2344657 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2344657 ']' 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
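The skip_rpc case that just finished starts the target with --no-rpc-server and then expects any RPC to fail. A bare-bones reproduction with the binaries named in the log (running it by hand like this is an assumption; the flags and the spdk_get_version probe are exactly what the harness used):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # same flags as the logged invocation
  ./scripts/rpc.py spdk_get_version \
      && echo 'unexpected: RPC server answered' \
      || echo 'RPC refused, which is what rpc/skip_rpc.sh@21 checks for'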
00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.498 13:35:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.757 [2024-07-15 13:35:44.047653] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:17.757 [2024-07-15 13:35:44.047712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344657 ] 00:05:17.757 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.757 [2024-07-15 13:35:44.133612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.757 [2024-07-15 13:35:44.224064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.325 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.325 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:18.325 13:35:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:18.325 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.325 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.584 [2024-07-15 13:35:44.851738] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:18.584 request: 00:05:18.584 { 00:05:18.584 "trtype": "tcp", 00:05:18.584 "method": "nvmf_get_transports", 00:05:18.584 "req_id": 1 00:05:18.584 } 00:05:18.584 Got JSON-RPC error response 00:05:18.584 response: 00:05:18.584 { 00:05:18.584 "code": -19, 00:05:18.584 "message": "No such device" 00:05:18.584 } 00:05:18.584 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:18.584 13:35:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:18.584 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.584 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.584 [2024-07-15 13:35:44.863854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.584 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.584 13:35:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:18.584 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.584 13:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.584 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.584 13:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:18.584 { 00:05:18.584 "subsystems": [ 00:05:18.584 { 00:05:18.584 "subsystem": "keyring", 00:05:18.584 "config": [] 00:05:18.584 }, 00:05:18.584 { 00:05:18.584 "subsystem": "iobuf", 00:05:18.584 "config": [ 00:05:18.584 { 00:05:18.584 "method": "iobuf_set_options", 00:05:18.584 "params": { 00:05:18.584 "small_pool_count": 8192, 00:05:18.584 "large_pool_count": 1024, 00:05:18.584 "small_bufsize": 8192, 00:05:18.584 "large_bufsize": 135168 00:05:18.584 } 00:05:18.584 } 00:05:18.584 ] 00:05:18.584 }, 00:05:18.584 { 00:05:18.584 "subsystem": 
"sock", 00:05:18.584 "config": [ 00:05:18.584 { 00:05:18.584 "method": "sock_set_default_impl", 00:05:18.584 "params": { 00:05:18.584 "impl_name": "posix" 00:05:18.584 } 00:05:18.584 }, 00:05:18.584 { 00:05:18.584 "method": "sock_impl_set_options", 00:05:18.584 "params": { 00:05:18.584 "impl_name": "ssl", 00:05:18.584 "recv_buf_size": 4096, 00:05:18.584 "send_buf_size": 4096, 00:05:18.584 "enable_recv_pipe": true, 00:05:18.584 "enable_quickack": false, 00:05:18.584 "enable_placement_id": 0, 00:05:18.584 "enable_zerocopy_send_server": true, 00:05:18.584 "enable_zerocopy_send_client": false, 00:05:18.584 "zerocopy_threshold": 0, 00:05:18.584 "tls_version": 0, 00:05:18.584 "enable_ktls": false 00:05:18.585 } 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "method": "sock_impl_set_options", 00:05:18.585 "params": { 00:05:18.585 "impl_name": "posix", 00:05:18.585 "recv_buf_size": 2097152, 00:05:18.585 "send_buf_size": 2097152, 00:05:18.585 "enable_recv_pipe": true, 00:05:18.585 "enable_quickack": false, 00:05:18.585 "enable_placement_id": 0, 00:05:18.585 "enable_zerocopy_send_server": true, 00:05:18.585 "enable_zerocopy_send_client": false, 00:05:18.585 "zerocopy_threshold": 0, 00:05:18.585 "tls_version": 0, 00:05:18.585 "enable_ktls": false 00:05:18.585 } 00:05:18.585 } 00:05:18.585 ] 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "vmd", 00:05:18.585 "config": [] 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "accel", 00:05:18.585 "config": [ 00:05:18.585 { 00:05:18.585 "method": "accel_set_options", 00:05:18.585 "params": { 00:05:18.585 "small_cache_size": 128, 00:05:18.585 "large_cache_size": 16, 00:05:18.585 "task_count": 2048, 00:05:18.585 "sequence_count": 2048, 00:05:18.585 "buf_count": 2048 00:05:18.585 } 00:05:18.585 } 00:05:18.585 ] 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "bdev", 00:05:18.585 "config": [ 00:05:18.585 { 00:05:18.585 "method": "bdev_set_options", 00:05:18.585 "params": { 00:05:18.585 "bdev_io_pool_size": 65535, 00:05:18.585 "bdev_io_cache_size": 256, 00:05:18.585 "bdev_auto_examine": true, 00:05:18.585 "iobuf_small_cache_size": 128, 00:05:18.585 "iobuf_large_cache_size": 16 00:05:18.585 } 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "method": "bdev_raid_set_options", 00:05:18.585 "params": { 00:05:18.585 "process_window_size_kb": 1024 00:05:18.585 } 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "method": "bdev_iscsi_set_options", 00:05:18.585 "params": { 00:05:18.585 "timeout_sec": 30 00:05:18.585 } 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "method": "bdev_nvme_set_options", 00:05:18.585 "params": { 00:05:18.585 "action_on_timeout": "none", 00:05:18.585 "timeout_us": 0, 00:05:18.585 "timeout_admin_us": 0, 00:05:18.585 "keep_alive_timeout_ms": 10000, 00:05:18.585 "arbitration_burst": 0, 00:05:18.585 "low_priority_weight": 0, 00:05:18.585 "medium_priority_weight": 0, 00:05:18.585 "high_priority_weight": 0, 00:05:18.585 "nvme_adminq_poll_period_us": 10000, 00:05:18.585 "nvme_ioq_poll_period_us": 0, 00:05:18.585 "io_queue_requests": 0, 00:05:18.585 "delay_cmd_submit": true, 00:05:18.585 "transport_retry_count": 4, 00:05:18.585 "bdev_retry_count": 3, 00:05:18.585 "transport_ack_timeout": 0, 00:05:18.585 "ctrlr_loss_timeout_sec": 0, 00:05:18.585 "reconnect_delay_sec": 0, 00:05:18.585 "fast_io_fail_timeout_sec": 0, 00:05:18.585 "disable_auto_failback": false, 00:05:18.585 "generate_uuids": false, 00:05:18.585 "transport_tos": 0, 00:05:18.585 "nvme_error_stat": false, 00:05:18.585 "rdma_srq_size": 0, 00:05:18.585 "io_path_stat": false, 
00:05:18.585 "allow_accel_sequence": false, 00:05:18.585 "rdma_max_cq_size": 0, 00:05:18.585 "rdma_cm_event_timeout_ms": 0, 00:05:18.585 "dhchap_digests": [ 00:05:18.585 "sha256", 00:05:18.585 "sha384", 00:05:18.585 "sha512" 00:05:18.585 ], 00:05:18.585 "dhchap_dhgroups": [ 00:05:18.585 "null", 00:05:18.585 "ffdhe2048", 00:05:18.585 "ffdhe3072", 00:05:18.585 "ffdhe4096", 00:05:18.585 "ffdhe6144", 00:05:18.585 "ffdhe8192" 00:05:18.585 ] 00:05:18.585 } 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "method": "bdev_nvme_set_hotplug", 00:05:18.585 "params": { 00:05:18.585 "period_us": 100000, 00:05:18.585 "enable": false 00:05:18.585 } 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "method": "bdev_wait_for_examine" 00:05:18.585 } 00:05:18.585 ] 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "scsi", 00:05:18.585 "config": null 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "scheduler", 00:05:18.585 "config": [ 00:05:18.585 { 00:05:18.585 "method": "framework_set_scheduler", 00:05:18.585 "params": { 00:05:18.585 "name": "static" 00:05:18.585 } 00:05:18.585 } 00:05:18.585 ] 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "vhost_scsi", 00:05:18.585 "config": [] 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "vhost_blk", 00:05:18.585 "config": [] 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "ublk", 00:05:18.585 "config": [] 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "nbd", 00:05:18.585 "config": [] 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "nvmf", 00:05:18.585 "config": [ 00:05:18.585 { 00:05:18.585 "method": "nvmf_set_config", 00:05:18.585 "params": { 00:05:18.585 "discovery_filter": "match_any", 00:05:18.585 "admin_cmd_passthru": { 00:05:18.585 "identify_ctrlr": false 00:05:18.585 } 00:05:18.585 } 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "method": "nvmf_set_max_subsystems", 00:05:18.585 "params": { 00:05:18.585 "max_subsystems": 1024 00:05:18.585 } 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "method": "nvmf_set_crdt", 00:05:18.585 "params": { 00:05:18.585 "crdt1": 0, 00:05:18.585 "crdt2": 0, 00:05:18.585 "crdt3": 0 00:05:18.585 } 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "method": "nvmf_create_transport", 00:05:18.585 "params": { 00:05:18.585 "trtype": "TCP", 00:05:18.585 "max_queue_depth": 128, 00:05:18.585 "max_io_qpairs_per_ctrlr": 127, 00:05:18.585 "in_capsule_data_size": 4096, 00:05:18.585 "max_io_size": 131072, 00:05:18.585 "io_unit_size": 131072, 00:05:18.585 "max_aq_depth": 128, 00:05:18.585 "num_shared_buffers": 511, 00:05:18.585 "buf_cache_size": 4294967295, 00:05:18.585 "dif_insert_or_strip": false, 00:05:18.585 "zcopy": false, 00:05:18.585 "c2h_success": true, 00:05:18.585 "sock_priority": 0, 00:05:18.585 "abort_timeout_sec": 1, 00:05:18.585 "ack_timeout": 0, 00:05:18.585 "data_wr_pool_size": 0 00:05:18.585 } 00:05:18.585 } 00:05:18.585 ] 00:05:18.585 }, 00:05:18.585 { 00:05:18.585 "subsystem": "iscsi", 00:05:18.585 "config": [ 00:05:18.585 { 00:05:18.585 "method": "iscsi_set_options", 00:05:18.585 "params": { 00:05:18.585 "node_base": "iqn.2016-06.io.spdk", 00:05:18.585 "max_sessions": 128, 00:05:18.585 "max_connections_per_session": 2, 00:05:18.585 "max_queue_depth": 64, 00:05:18.585 "default_time2wait": 2, 00:05:18.585 "default_time2retain": 20, 00:05:18.585 "first_burst_length": 8192, 00:05:18.585 "immediate_data": true, 00:05:18.585 "allow_duplicated_isid": false, 00:05:18.585 "error_recovery_level": 0, 00:05:18.585 "nop_timeout": 60, 00:05:18.585 "nop_in_interval": 30, 00:05:18.585 "disable_chap": 
false, 00:05:18.585 "require_chap": false, 00:05:18.585 "mutual_chap": false, 00:05:18.585 "chap_group": 0, 00:05:18.585 "max_large_datain_per_connection": 64, 00:05:18.585 "max_r2t_per_connection": 4, 00:05:18.585 "pdu_pool_size": 36864, 00:05:18.585 "immediate_data_pool_size": 16384, 00:05:18.585 "data_out_pool_size": 2048 00:05:18.585 } 00:05:18.585 } 00:05:18.585 ] 00:05:18.585 } 00:05:18.585 ] 00:05:18.585 } 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2344657 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2344657 ']' 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2344657 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2344657 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2344657' 00:05:18.585 killing process with pid 2344657 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2344657 00:05:18.585 13:35:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2344657 00:05:19.155 13:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2344857 00:05:19.155 13:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:19.155 13:35:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2344857 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2344857 ']' 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2344857 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2344857 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2344857' 00:05:24.428 killing process with pid 2344857 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2344857 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2344857 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:24.428 13:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:24.428 00:05:24.428 real 0m6.849s 00:05:24.429 user 0m6.567s 00:05:24.429 sys 0m0.732s 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.429 ************************************ 00:05:24.429 END TEST skip_rpc_with_json 00:05:24.429 ************************************ 00:05:24.429 13:35:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.429 13:35:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:24.429 13:35:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.429 13:35:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.429 13:35:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.429 ************************************ 00:05:24.429 START TEST skip_rpc_with_delay 00:05:24.429 ************************************ 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:24.429 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.687 [2024-07-15 13:35:50.988787] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
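The skip_rpc_with_json run that ends above is a save/restore round trip: configure the live target over RPC, dump everything with save_config, then boot a second target straight from that JSON and grep its log for the transport banner. A rough sketch with paths shortened (the RPC names, target flags, and grep string are all taken from the log; the redirection to log.txt is an assumption about how the harness captures output):

  ./scripts/rpc.py nvmf_create_transport -t tcp              # the transport RPC issued before saving
  ./scripts/rpc.py save_config > config.json                 # yields the subsystem dump printed above
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 5                                                    # skip_rpc.sh@48 waits before checking
  grep -q 'TCP Transport Init' log.txt                       # proves the saved config was replayed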
00:05:24.687 [2024-07-15 13:35:50.988877] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:24.687 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:24.687 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.687 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.687 13:35:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.687 00:05:24.687 real 0m0.075s 00:05:24.687 user 0m0.046s 00:05:24.687 sys 0m0.029s 00:05:24.687 13:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.687 13:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:24.687 ************************************ 00:05:24.687 END TEST skip_rpc_with_delay 00:05:24.687 ************************************ 00:05:24.687 13:35:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:24.687 13:35:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:24.687 13:35:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:24.687 13:35:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:24.687 13:35:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.687 13:35:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.687 13:35:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.687 ************************************ 00:05:24.687 START TEST exit_on_failed_rpc_init 00:05:24.687 ************************************ 00:05:24.687 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:24.687 13:35:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2345636 00:05:24.687 13:35:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2345636 00:05:24.687 13:35:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.687 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2345636 ']' 00:05:24.687 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.687 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.687 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.687 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.687 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.687 [2024-07-15 13:35:51.148525] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
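The skip_rpc_with_delay errors above ("Cannot use '--wait-for-rpc' if no RPC server is going to be started") are the expected outcome: --wait-for-rpc pauses initialization until an RPC releases it, so combining it with --no-rpc-server leaves nothing that could ever resume the app. The normal pairing looks roughly like this (framework_start_init is the standard SPDK resume RPC; it does not appear in this log, so treat its use here as an assumption):

  ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &   # init pauses before subsystem start
  ./scripts/rpc.py framework_start_init          # resumes it; with --no-rpc-server this step is impossible,
                                                 # hence the spdk_app_start error captured above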
00:05:24.687 [2024-07-15 13:35:51.148592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2345636 ] 00:05:24.687 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.945 [2024-07-15 13:35:51.234706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.945 [2024-07-15 13:35:51.326142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:25.511 13:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.511 [2024-07-15 13:35:52.019124] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:25.511 [2024-07-15 13:35:52.019182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2345794 ] 00:05:25.769 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.769 [2024-07-15 13:35:52.104652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.769 [2024-07-15 13:35:52.186158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.769 [2024-07-15 13:35:52.186254] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:05:25.769 [2024-07-15 13:35:52.186266] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:25.769 [2024-07-15 13:35:52.186274] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2345636 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2345636 ']' 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2345636 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.769 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2345636 00:05:26.028 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.028 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.028 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2345636' 00:05:26.028 killing process with pid 2345636 00:05:26.028 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2345636 00:05:26.028 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2345636 00:05:26.286 00:05:26.286 real 0m1.584s 00:05:26.286 user 0m1.770s 00:05:26.286 sys 0m0.515s 00:05:26.286 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.287 13:35:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.287 ************************************ 00:05:26.287 END TEST exit_on_failed_rpc_init 00:05:26.287 ************************************ 00:05:26.287 13:35:52 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:26.287 13:35:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:26.287 00:05:26.287 real 0m14.382s 00:05:26.287 user 0m13.693s 00:05:26.287 sys 0m1.911s 00:05:26.287 13:35:52 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.287 13:35:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.287 ************************************ 00:05:26.287 END TEST skip_rpc 00:05:26.287 ************************************ 00:05:26.287 13:35:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.287 13:35:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:26.287 13:35:52 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.287 13:35:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.287 13:35:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.287 ************************************ 00:05:26.287 START TEST rpc_client 00:05:26.287 ************************************ 00:05:26.287 13:35:52 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:26.545 * Looking for test storage... 00:05:26.545 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:26.545 13:35:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:26.545 OK 00:05:26.545 13:35:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:26.545 00:05:26.545 real 0m0.138s 00:05:26.545 user 0m0.063s 00:05:26.545 sys 0m0.086s 00:05:26.545 13:35:52 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.545 13:35:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:26.545 ************************************ 00:05:26.545 END TEST rpc_client 00:05:26.545 ************************************ 00:05:26.545 13:35:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.545 13:35:52 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:26.545 13:35:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.545 13:35:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.545 13:35:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.545 ************************************ 00:05:26.545 START TEST json_config 00:05:26.545 ************************************ 00:05:26.545 13:35:53 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@21 
-- # NET_TYPE=phy-fallback 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:26.805 13:35:53 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.805 13:35:53 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.805 13:35:53 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.805 13:35:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.805 13:35:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.805 13:35:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.805 13:35:53 json_config -- paths/export.sh@5 -- # export PATH 00:05:26.805 13:35:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@47 -- # : 0 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:26.805 13:35:53 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:26.805 13:35:53 json_config -- 
json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:26.805 INFO: JSON configuration test init 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:26.805 13:35:53 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:26.805 13:35:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.805 13:35:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.806 13:35:53 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:26.806 13:35:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.806 13:35:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.806 13:35:53 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:26.806 13:35:53 json_config -- json_config/common.sh@9 -- # local app=target 00:05:26.806 13:35:53 json_config -- json_config/common.sh@10 -- # shift 00:05:26.806 13:35:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.806 13:35:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.806 13:35:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.806 13:35:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.806 13:35:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.806 13:35:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2345995 00:05:26.806 13:35:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.806 Waiting for target to run... 
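Before the waitforlisten trace that follows, the launch-and-wait step in miniature — an approximation of json_config_test_start_app plus waitforlisten, not their actual bodies; the command line and flags are the ones traced below, while the socket-polling loop and its timeout are assumptions (the real helper lives in autotest_common.sh):
# Start the target in RPC-only mode and poll until its UNIX domain socket exists.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
    -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
app_pid=$!
for i in $(seq 1 100); do
    [ -S /var/tmp/spdk_tgt.sock ] && break   # assumed readiness check; waitforlisten does more
    sleep 0.1
done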
00:05:26.806 13:35:53 json_config -- json_config/common.sh@25 -- # waitforlisten 2345995 /var/tmp/spdk_tgt.sock 00:05:26.806 13:35:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:26.806 13:35:53 json_config -- common/autotest_common.sh@829 -- # '[' -z 2345995 ']' 00:05:26.806 13:35:53 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.806 13:35:53 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.806 13:35:53 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.806 13:35:53 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.806 13:35:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.806 [2024-07-15 13:35:53.218739] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:26.806 [2024-07-15 13:35:53.218813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2345995 ] 00:05:26.806 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.074 [2024-07-15 13:35:53.548193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.333 [2024-07-15 13:35:53.621723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.592 13:35:54 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.592 13:35:54 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:27.592 13:35:54 json_config -- json_config/common.sh@26 -- # echo '' 00:05:27.592 00:05:27.592 13:35:54 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:27.592 13:35:54 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:27.592 13:35:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.592 13:35:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.592 13:35:54 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:27.592 13:35:54 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:27.592 13:35:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.592 13:35:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.592 13:35:54 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:27.592 13:35:54 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:27.592 13:35:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:30.883 13:35:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.883 13:35:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.883 13:35:57 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:30.883 13:35:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:30.883 13:35:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.883 13:35:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:30.883 13:35:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.883 13:35:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:30.883 13:35:57 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:30.883 13:35:57 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:30.883 13:35:57 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:30.883 13:35:57 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:30.883 13:35:57 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:30.883 13:35:57 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:30.883 13:35:57 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:30.883 13:35:57 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:30.883 13:35:57 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:31.142 13:35:57 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:31.142 13:35:57 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:31.142 13:35:57 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:31.142 13:35:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@289 -- 
# local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@296 -- # e810=() 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@297 -- # x722=() 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@298 -- # mlx=() 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:37.794 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:37.794 13:36:03 json_config -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:37.794 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:37.794 13:36:03 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:37.795 Found net devices under 0000:18:00.0: mlx_0_0 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:37.795 Found net devices under 0000:18:00.1: mlx_0_1 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@58 -- # uname 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:37.795 13:36:03 json_config -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:37.795 13:36:03 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:37.795 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:37.795 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:05:37.795 altname enp24s0f0np0 00:05:37.795 altname ens785f0np0 00:05:37.795 inet 192.168.100.8/24 scope global mlx_0_0 00:05:37.795 valid_lft forever preferred_lft forever 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:37.795 13:36:04 
json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:37.795 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:37.795 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:05:37.795 altname enp24s0f1np1 00:05:37.795 altname ens785f1np1 00:05:37.795 inet 192.168.100.9/24 scope global mlx_0_1 00:05:37.795 valid_lft forever preferred_lft forever 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@422 -- # return 0 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:05:37.795 192.168.100.9' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:37.795 192.168.100.9' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@457 -- # head -n 1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:37.795 192.168.100.9' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@458 -- # head -n 1 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:37.795 13:36:04 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:37.795 13:36:04 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:37.795 13:36:04 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:37.795 13:36:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:38.055 MallocForNvmf0 00:05:38.055 13:36:04 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:38.055 13:36:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:38.055 MallocForNvmf1 00:05:38.055 13:36:04 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:38.055 13:36:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:38.314 [2024-07-15 13:36:04.733574] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:38.314 [2024-07-15 13:36:04.760101] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7a62b0/0x833200) succeed. 00:05:38.314 [2024-07-15 13:36:04.772231] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7a84a0/0x7b31a0) succeed. 
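The NIC address discovery that nvmf/common.sh performed above, condensed to a sketch — the interface names and the awk/cut pipeline are taken from the trace, while looping over a fixed interface list is a simplification of get_rdma_if_list:
for ifc in mlx_0_0 mlx_0_1; do
    # prints 192.168.100.8 and 192.168.100.9 on this host
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done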
00:05:38.314 13:36:04 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.315 13:36:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.574 13:36:05 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.574 13:36:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.833 13:36:05 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.833 13:36:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.092 13:36:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:39.092 13:36:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:39.092 [2024-07-15 13:36:05.565790] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:39.092 13:36:05 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:39.092 13:36:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.092 13:36:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.351 13:36:05 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:39.351 13:36:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.351 13:36:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.351 13:36:05 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:39.351 13:36:05 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:39.352 13:36:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:39.352 MallocBdevForConfigChangeCheck 00:05:39.611 13:36:05 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:39.611 13:36:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.611 13:36:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.611 13:36:05 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:39.611 13:36:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.869 13:36:06 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:39.870 INFO: shutting down applications... 
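The target configuration built in this stretch of the log, replayed as a plain rpc.py session. Every command and argument appears verbatim in the trace above; the helper function and the final redirection into spdk_tgt_config.json are conveniences implied by the later relaunch rather than shown literally:
rpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
rpc bdev_malloc_create 8 512 --name MallocForNvmf0
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
rpc nvmf_create_transport -t rdma -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
rpc save_config > /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json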
00:05:39.870 13:36:06 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:39.870 13:36:06 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:39.870 13:36:06 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:39.870 13:36:06 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:48.020 Calling clear_iscsi_subsystem 00:05:48.020 Calling clear_nvmf_subsystem 00:05:48.020 Calling clear_nbd_subsystem 00:05:48.020 Calling clear_ublk_subsystem 00:05:48.020 Calling clear_vhost_blk_subsystem 00:05:48.020 Calling clear_vhost_scsi_subsystem 00:05:48.020 Calling clear_bdev_subsystem 00:05:48.020 13:36:13 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:48.020 13:36:13 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:48.020 13:36:13 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:48.020 13:36:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.020 13:36:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:48.020 13:36:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:48.020 13:36:13 json_config -- json_config/json_config.sh@345 -- # break 00:05:48.020 13:36:13 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:48.020 13:36:13 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:48.020 13:36:13 json_config -- json_config/common.sh@31 -- # local app=target 00:05:48.020 13:36:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:48.020 13:36:13 json_config -- json_config/common.sh@35 -- # [[ -n 2345995 ]] 00:05:48.020 13:36:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2345995 00:05:48.020 13:36:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:48.020 13:36:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.020 13:36:13 json_config -- json_config/common.sh@41 -- # kill -0 2345995 00:05:48.020 13:36:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.020 13:36:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.020 13:36:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.020 13:36:14 json_config -- json_config/common.sh@41 -- # kill -0 2345995 00:05:48.020 13:36:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:48.020 13:36:14 json_config -- json_config/common.sh@43 -- # break 00:05:48.020 13:36:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:48.020 13:36:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:48.020 SPDK target shutdown done 00:05:48.020 13:36:14 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:48.020 INFO: relaunching applications... 
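The shutdown just traced, stripped to its two moving parts; the retry count and 0.5 s sleep are the ones visible in json_config/common.sh above:
kill -SIGINT "$app_pid"
for i in $(seq 1 30); do
    kill -0 "$app_pid" 2>/dev/null || break   # process gone: stop waiting
    sleep 0.5
done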
00:05:48.020 13:36:14 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.020 13:36:14 json_config -- json_config/common.sh@9 -- # local app=target 00:05:48.020 13:36:14 json_config -- json_config/common.sh@10 -- # shift 00:05:48.020 13:36:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:48.020 13:36:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:48.020 13:36:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:48.020 13:36:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.020 13:36:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.021 13:36:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2351415 00:05:48.021 13:36:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:48.021 Waiting for target to run... 00:05:48.021 13:36:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.021 13:36:14 json_config -- json_config/common.sh@25 -- # waitforlisten 2351415 /var/tmp/spdk_tgt.sock 00:05:48.021 13:36:14 json_config -- common/autotest_common.sh@829 -- # '[' -z 2351415 ']' 00:05:48.021 13:36:14 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.021 13:36:14 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.021 13:36:14 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.021 13:36:14 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.021 13:36:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.021 [2024-07-15 13:36:14.281064] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:05:48.021 [2024-07-15 13:36:14.281134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351415 ] 00:05:48.021 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.279 [2024-07-15 13:36:14.594721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.279 [2024-07-15 13:36:14.668781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.565 [2024-07-15 13:36:17.715299] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2984990/0x2909660) succeed. 00:05:51.565 [2024-07-15 13:36:17.726836] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x29893f0/0x2989700) succeed. 
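The relaunch step in miniature: this time the saved JSON drives initialization directly, so no RPC replay is needed (flags copied from the command line traced above):
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
    -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &
app_pid=$!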
00:05:51.565 [2024-07-15 13:36:17.776776] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:52.134 13:36:18 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.134 13:36:18 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:52.134 13:36:18 json_config -- json_config/common.sh@26 -- # echo '' 00:05:52.134 00:05:52.134 13:36:18 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:52.134 13:36:18 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:52.134 INFO: Checking if target configuration is the same... 00:05:52.134 13:36:18 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:52.134 13:36:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.134 13:36:18 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.134 + '[' 2 -ne 2 ']' 00:05:52.134 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:52.134 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:52.134 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:52.134 +++ basename /dev/fd/62 00:05:52.134 ++ mktemp /tmp/62.XXX 00:05:52.134 + tmp_file_1=/tmp/62.V6V 00:05:52.134 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.134 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:52.134 + tmp_file_2=/tmp/spdk_tgt_config.json.E9M 00:05:52.134 + ret=0 00:05:52.134 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:52.393 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:52.393 + diff -u /tmp/62.V6V /tmp/spdk_tgt_config.json.E9M 00:05:52.393 + echo 'INFO: JSON config files are the same' 00:05:52.393 INFO: JSON config files are the same 00:05:52.393 + rm /tmp/62.V6V /tmp/spdk_tgt_config.json.E9M 00:05:52.393 + exit 0 00:05:52.393 13:36:18 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:52.393 13:36:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:52.393 INFO: changing configuration and checking if this can be detected... 
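The equality check that json_diff.sh just performed, paraphrased under a few assumptions: the temp-file names are placeholders, and config_filter.py is assumed to read stdin and write stdout as json_diff.sh plumbs it; the -method sort normalization and diff -u are exactly what the trace shows:
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live_config.json
test/json_config/config_filter.py -method sort \
    < spdk_tgt_config.json > /tmp/file_config.json
diff -u /tmp/live_config.json /tmp/file_config.json \
    && echo 'INFO: JSON config files are the same'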
00:05:52.393 13:36:18 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:52.393 13:36:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:52.653 13:36:19 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:52.653 13:36:19 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.653 13:36:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.653 + '[' 2 -ne 2 ']' 00:05:52.653 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:52.653 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:52.653 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:52.653 +++ basename /dev/fd/62 00:05:52.653 ++ mktemp /tmp/62.XXX 00:05:52.653 + tmp_file_1=/tmp/62.L37 00:05:52.653 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.653 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:52.653 + tmp_file_2=/tmp/spdk_tgt_config.json.FUH 00:05:52.653 + ret=0 00:05:52.653 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:52.912 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:52.912 + diff -u /tmp/62.L37 /tmp/spdk_tgt_config.json.FUH 00:05:52.912 + ret=1 00:05:52.912 + echo '=== Start of file: /tmp/62.L37 ===' 00:05:52.912 + cat /tmp/62.L37 00:05:52.912 + echo '=== End of file: /tmp/62.L37 ===' 00:05:52.912 + echo '' 00:05:52.912 + echo '=== Start of file: /tmp/spdk_tgt_config.json.FUH ===' 00:05:52.912 + cat /tmp/spdk_tgt_config.json.FUH 00:05:52.912 + echo '=== End of file: /tmp/spdk_tgt_config.json.FUH ===' 00:05:52.912 + echo '' 00:05:52.912 + rm /tmp/62.L37 /tmp/spdk_tgt_config.json.FUH 00:05:52.912 + exit 1 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:52.912 INFO: configuration change detected. 
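The change-detection half of the test in outline, reusing the placeholder temp files from the sketch above: drop the sentinel bdev, re-normalize the live config the same way, and this time expect diff to fail:
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live_config.json
if ! diff -u /tmp/live_config.json /tmp/file_config.json > /dev/null; then
    echo 'INFO: configuration change detected.'
fi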
00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:52.912 13:36:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.912 13:36:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@317 -- # [[ -n 2351415 ]] 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:52.912 13:36:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.912 13:36:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:52.912 13:36:19 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:52.912 13:36:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.912 13:36:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.171 13:36:19 json_config -- json_config/json_config.sh@323 -- # killprocess 2351415 00:05:53.171 13:36:19 json_config -- common/autotest_common.sh@948 -- # '[' -z 2351415 ']' 00:05:53.171 13:36:19 json_config -- common/autotest_common.sh@952 -- # kill -0 2351415 00:05:53.171 13:36:19 json_config -- common/autotest_common.sh@953 -- # uname 00:05:53.171 13:36:19 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.171 13:36:19 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2351415 00:05:53.171 13:36:19 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.171 13:36:19 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.171 13:36:19 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2351415' 00:05:53.171 killing process with pid 2351415 00:05:53.171 13:36:19 json_config -- common/autotest_common.sh@967 -- # kill 2351415 00:05:53.171 13:36:19 json_config -- common/autotest_common.sh@972 -- # wait 2351415 00:06:01.297 13:36:26 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.297 13:36:26 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:01.297 13:36:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.297 13:36:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.297 13:36:26 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:01.297 13:36:26 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:01.297 INFO: Success 00:06:01.297 13:36:26 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:01.297 13:36:26 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:01.297 13:36:26 json_config -- nvmf/common.sh@117 -- # sync 00:06:01.297 13:36:26 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:06:01.297 13:36:26 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:06:01.297 13:36:26 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:01.297 13:36:26 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:01.297 13:36:26 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:06:01.297 00:06:01.297 real 0m33.658s 00:06:01.297 user 0m36.414s 00:06:01.297 sys 0m7.370s 00:06:01.297 13:36:26 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.297 13:36:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.297 ************************************ 00:06:01.297 END TEST json_config 00:06:01.297 ************************************ 00:06:01.297 13:36:26 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.297 13:36:26 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:01.297 13:36:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.297 13:36:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.297 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.297 ************************************ 00:06:01.297 START TEST json_config_extra_key 00:06:01.297 ************************************ 00:06:01.297 13:36:26 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:01.297 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.297 13:36:26 
json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.297 13:36:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:01.297 13:36:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.297 13:36:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.297 13:36:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.297 13:36:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.298 13:36:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.298 13:36:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.298 13:36:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:01.298 13:36:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.298 13:36:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:01.298 13:36:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.298 13:36:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.298 13:36:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.298 13:36:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.298 13:36:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.298 13:36:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.298 13:36:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.298 13:36:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:01.298 13:36:26 json_config_extra_key -- 
json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:01.298 INFO: launching applications... 00:06:01.298 13:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2353173 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.298 Waiting for target to run... 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2353173 /var/tmp/spdk_tgt.sock 00:06:01.298 13:36:26 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2353173 ']' 00:06:01.298 13:36:26 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.298 13:36:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:01.298 13:36:26 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.298 13:36:26 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
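For reference, the launch-and-wait pattern that json_config_test_start_app and waitforlisten exercise above reduces to starting spdk_tgt against a JSON config on a private RPC socket and polling that socket until it answers. A minimal sketch, assuming an SPDK build tree in the current directory; the retry count and sleep interval are illustrative, not the test's exact values:

# Start the target with the extra_key JSON config on a private RPC socket
RPC_SOCK=/var/tmp/spdk_tgt.sock
./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$RPC_SOCK" --json ./test/json_config/extra_key.json &
tgt_pid=$!

# Poll the socket until the target responds (rough stand-in for waitforlisten)
for _ in $(seq 1 100); do
    ./scripts/rpc.py -t 1 -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.5
done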
00:06:01.298 13:36:26 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.298 13:36:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:01.298 [2024-07-15 13:36:26.945956] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:01.298 [2024-07-15 13:36:26.946021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353173 ] 00:06:01.298 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.298 [2024-07-15 13:36:27.483102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.298 [2024-07-15 13:36:27.577969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.298 13:36:27 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.298 13:36:27 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:01.298 13:36:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:01.298 00:06:01.298 13:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:01.298 INFO: shutting down applications... 00:06:01.298 13:36:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:01.298 13:36:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:01.298 13:36:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:01.298 13:36:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2353173 ]] 00:06:01.298 13:36:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2353173 00:06:01.298 13:36:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:01.298 13:36:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.298 13:36:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2353173 00:06:01.298 13:36:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.867 13:36:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.867 13:36:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.867 13:36:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2353173 00:06:01.867 13:36:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:01.867 13:36:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:01.867 13:36:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:01.867 13:36:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:01.867 SPDK target shutdown done 00:06:01.867 13:36:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:01.867 Success 00:06:01.867 00:06:01.867 real 0m1.495s 00:06:01.867 user 0m1.050s 00:06:01.867 sys 0m0.666s 00:06:01.867 13:36:28 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.867 13:36:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:01.867 ************************************ 00:06:01.867 END TEST json_config_extra_key 00:06:01.867 ************************************ 00:06:01.867 13:36:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.867 13:36:28 -- 
spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.867 13:36:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.867 13:36:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.867 13:36:28 -- common/autotest_common.sh@10 -- # set +x 00:06:01.867 ************************************ 00:06:01.867 START TEST alias_rpc 00:06:01.867 ************************************ 00:06:01.867 13:36:28 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.126 * Looking for test storage... 00:06:02.126 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:02.126 13:36:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.126 13:36:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2353408 00:06:02.126 13:36:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2353408 00:06:02.126 13:36:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.126 13:36:28 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2353408 ']' 00:06:02.126 13:36:28 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.126 13:36:28 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.126 13:36:28 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.126 13:36:28 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.126 13:36:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.126 [2024-07-15 13:36:28.524060] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:02.126 [2024-07-15 13:36:28.524122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353408 ] 00:06:02.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.127 [2024-07-15 13:36:28.609827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.385 [2024-07-15 13:36:28.697634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.952 13:36:29 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.953 13:36:29 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:02.953 13:36:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:03.211 13:36:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2353408 00:06:03.211 13:36:29 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2353408 ']' 00:06:03.212 13:36:29 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2353408 00:06:03.212 13:36:29 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:03.212 13:36:29 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.212 13:36:29 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2353408 00:06:03.212 13:36:29 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.212 13:36:29 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.212 13:36:29 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2353408' 00:06:03.212 killing process with pid 2353408 00:06:03.212 13:36:29 alias_rpc -- common/autotest_common.sh@967 -- # kill 2353408 00:06:03.212 13:36:29 alias_rpc -- common/autotest_common.sh@972 -- # wait 2353408 00:06:03.471 00:06:03.471 real 0m1.572s 00:06:03.471 user 0m1.652s 00:06:03.471 sys 0m0.484s 00:06:03.471 13:36:29 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.471 13:36:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.471 ************************************ 00:06:03.471 END TEST alias_rpc 00:06:03.471 ************************************ 00:06:03.471 13:36:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.471 13:36:29 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:03.471 13:36:29 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:03.471 13:36:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.471 13:36:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.471 13:36:29 -- common/autotest_common.sh@10 -- # set +x 00:06:03.730 ************************************ 00:06:03.730 START TEST spdkcli_tcp 00:06:03.730 ************************************ 00:06:03.730 13:36:30 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:03.730 * Looking for test storage... 
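The alias_rpc run that finishes above stands up a plain spdk_tgt on the default /var/tmp/spdk.sock, replays a config through scripts/rpc.py load_config with the -i flag seen in the trace, and tears the target down with SIGINT. A rough sketch; the config file name and the stdin redirection are assumptions for illustration rather than the test's exact fixtures:

# Target on the default RPC socket (/var/tmp/spdk.sock)
./build/bin/spdk_tgt &
tgt_pid=$!
sleep 1                                   # stand-in for waitforlisten

# Replay a JSON config over RPC; conf.json is a placeholder name
./scripts/rpc.py load_config -i < conf.json

# killprocess: interrupt the target and wait for a clean exit
kill -SIGINT "$tgt_pid"
wait "$tgt_pid"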
00:06:03.730 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:03.730 13:36:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:03.730 13:36:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:03.730 13:36:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:03.730 13:36:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:03.730 13:36:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:03.730 13:36:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:03.730 13:36:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:03.730 13:36:30 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:03.730 13:36:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.730 13:36:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2353660 00:06:03.730 13:36:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2353660 00:06:03.730 13:36:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:03.730 13:36:30 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2353660 ']' 00:06:03.730 13:36:30 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.730 13:36:30 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.730 13:36:30 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.730 13:36:30 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.730 13:36:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.730 [2024-07-15 13:36:30.187512] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:03.730 [2024-07-15 13:36:30.187581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353660 ] 00:06:03.730 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.988 [2024-07-15 13:36:30.275170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.988 [2024-07-15 13:36:30.367845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.989 [2024-07-15 13:36:30.367847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.557 13:36:30 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.557 13:36:31 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:04.557 13:36:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2353830 00:06:04.557 13:36:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:04.557 13:36:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:04.815 [ 00:06:04.816 "bdev_malloc_delete", 00:06:04.816 "bdev_malloc_create", 00:06:04.816 "bdev_null_resize", 00:06:04.816 "bdev_null_delete", 00:06:04.816 "bdev_null_create", 00:06:04.816 "bdev_nvme_cuse_unregister", 00:06:04.816 "bdev_nvme_cuse_register", 00:06:04.816 "bdev_opal_new_user", 00:06:04.816 "bdev_opal_set_lock_state", 00:06:04.816 "bdev_opal_delete", 00:06:04.816 "bdev_opal_get_info", 00:06:04.816 "bdev_opal_create", 00:06:04.816 "bdev_nvme_opal_revert", 00:06:04.816 "bdev_nvme_opal_init", 00:06:04.816 "bdev_nvme_send_cmd", 00:06:04.816 "bdev_nvme_get_path_iostat", 00:06:04.816 "bdev_nvme_get_mdns_discovery_info", 00:06:04.816 "bdev_nvme_stop_mdns_discovery", 00:06:04.816 "bdev_nvme_start_mdns_discovery", 00:06:04.816 "bdev_nvme_set_multipath_policy", 00:06:04.816 "bdev_nvme_set_preferred_path", 00:06:04.816 "bdev_nvme_get_io_paths", 00:06:04.816 "bdev_nvme_remove_error_injection", 00:06:04.816 "bdev_nvme_add_error_injection", 00:06:04.816 "bdev_nvme_get_discovery_info", 00:06:04.816 "bdev_nvme_stop_discovery", 00:06:04.816 "bdev_nvme_start_discovery", 00:06:04.816 "bdev_nvme_get_controller_health_info", 00:06:04.816 "bdev_nvme_disable_controller", 00:06:04.816 "bdev_nvme_enable_controller", 00:06:04.816 "bdev_nvme_reset_controller", 00:06:04.816 "bdev_nvme_get_transport_statistics", 00:06:04.816 "bdev_nvme_apply_firmware", 00:06:04.816 "bdev_nvme_detach_controller", 00:06:04.816 "bdev_nvme_get_controllers", 00:06:04.816 "bdev_nvme_attach_controller", 00:06:04.816 "bdev_nvme_set_hotplug", 00:06:04.816 "bdev_nvme_set_options", 00:06:04.816 "bdev_passthru_delete", 00:06:04.816 "bdev_passthru_create", 00:06:04.816 "bdev_lvol_set_parent_bdev", 00:06:04.816 "bdev_lvol_set_parent", 00:06:04.816 "bdev_lvol_check_shallow_copy", 00:06:04.816 "bdev_lvol_start_shallow_copy", 00:06:04.816 "bdev_lvol_grow_lvstore", 00:06:04.816 "bdev_lvol_get_lvols", 00:06:04.816 "bdev_lvol_get_lvstores", 00:06:04.816 "bdev_lvol_delete", 00:06:04.816 "bdev_lvol_set_read_only", 00:06:04.816 "bdev_lvol_resize", 00:06:04.816 "bdev_lvol_decouple_parent", 00:06:04.816 "bdev_lvol_inflate", 00:06:04.816 "bdev_lvol_rename", 00:06:04.816 "bdev_lvol_clone_bdev", 00:06:04.816 "bdev_lvol_clone", 00:06:04.816 "bdev_lvol_snapshot", 00:06:04.816 "bdev_lvol_create", 00:06:04.816 "bdev_lvol_delete_lvstore", 00:06:04.816 
"bdev_lvol_rename_lvstore", 00:06:04.816 "bdev_lvol_create_lvstore", 00:06:04.816 "bdev_raid_set_options", 00:06:04.816 "bdev_raid_remove_base_bdev", 00:06:04.816 "bdev_raid_add_base_bdev", 00:06:04.816 "bdev_raid_delete", 00:06:04.816 "bdev_raid_create", 00:06:04.816 "bdev_raid_get_bdevs", 00:06:04.816 "bdev_error_inject_error", 00:06:04.816 "bdev_error_delete", 00:06:04.816 "bdev_error_create", 00:06:04.816 "bdev_split_delete", 00:06:04.816 "bdev_split_create", 00:06:04.816 "bdev_delay_delete", 00:06:04.816 "bdev_delay_create", 00:06:04.816 "bdev_delay_update_latency", 00:06:04.816 "bdev_zone_block_delete", 00:06:04.816 "bdev_zone_block_create", 00:06:04.816 "blobfs_create", 00:06:04.816 "blobfs_detect", 00:06:04.816 "blobfs_set_cache_size", 00:06:04.816 "bdev_aio_delete", 00:06:04.816 "bdev_aio_rescan", 00:06:04.816 "bdev_aio_create", 00:06:04.816 "bdev_ftl_set_property", 00:06:04.816 "bdev_ftl_get_properties", 00:06:04.816 "bdev_ftl_get_stats", 00:06:04.816 "bdev_ftl_unmap", 00:06:04.816 "bdev_ftl_unload", 00:06:04.816 "bdev_ftl_delete", 00:06:04.816 "bdev_ftl_load", 00:06:04.816 "bdev_ftl_create", 00:06:04.816 "bdev_virtio_attach_controller", 00:06:04.816 "bdev_virtio_scsi_get_devices", 00:06:04.816 "bdev_virtio_detach_controller", 00:06:04.816 "bdev_virtio_blk_set_hotplug", 00:06:04.816 "bdev_iscsi_delete", 00:06:04.816 "bdev_iscsi_create", 00:06:04.816 "bdev_iscsi_set_options", 00:06:04.816 "accel_error_inject_error", 00:06:04.816 "ioat_scan_accel_module", 00:06:04.816 "dsa_scan_accel_module", 00:06:04.816 "iaa_scan_accel_module", 00:06:04.816 "keyring_file_remove_key", 00:06:04.816 "keyring_file_add_key", 00:06:04.816 "keyring_linux_set_options", 00:06:04.816 "iscsi_get_histogram", 00:06:04.816 "iscsi_enable_histogram", 00:06:04.816 "iscsi_set_options", 00:06:04.816 "iscsi_get_auth_groups", 00:06:04.816 "iscsi_auth_group_remove_secret", 00:06:04.816 "iscsi_auth_group_add_secret", 00:06:04.816 "iscsi_delete_auth_group", 00:06:04.816 "iscsi_create_auth_group", 00:06:04.816 "iscsi_set_discovery_auth", 00:06:04.816 "iscsi_get_options", 00:06:04.816 "iscsi_target_node_request_logout", 00:06:04.816 "iscsi_target_node_set_redirect", 00:06:04.816 "iscsi_target_node_set_auth", 00:06:04.816 "iscsi_target_node_add_lun", 00:06:04.816 "iscsi_get_stats", 00:06:04.816 "iscsi_get_connections", 00:06:04.816 "iscsi_portal_group_set_auth", 00:06:04.816 "iscsi_start_portal_group", 00:06:04.816 "iscsi_delete_portal_group", 00:06:04.816 "iscsi_create_portal_group", 00:06:04.816 "iscsi_get_portal_groups", 00:06:04.816 "iscsi_delete_target_node", 00:06:04.816 "iscsi_target_node_remove_pg_ig_maps", 00:06:04.816 "iscsi_target_node_add_pg_ig_maps", 00:06:04.816 "iscsi_create_target_node", 00:06:04.816 "iscsi_get_target_nodes", 00:06:04.816 "iscsi_delete_initiator_group", 00:06:04.816 "iscsi_initiator_group_remove_initiators", 00:06:04.816 "iscsi_initiator_group_add_initiators", 00:06:04.816 "iscsi_create_initiator_group", 00:06:04.816 "iscsi_get_initiator_groups", 00:06:04.816 "nvmf_set_crdt", 00:06:04.816 "nvmf_set_config", 00:06:04.816 "nvmf_set_max_subsystems", 00:06:04.816 "nvmf_stop_mdns_prr", 00:06:04.816 "nvmf_publish_mdns_prr", 00:06:04.816 "nvmf_subsystem_get_listeners", 00:06:04.816 "nvmf_subsystem_get_qpairs", 00:06:04.816 "nvmf_subsystem_get_controllers", 00:06:04.816 "nvmf_get_stats", 00:06:04.816 "nvmf_get_transports", 00:06:04.816 "nvmf_create_transport", 00:06:04.816 "nvmf_get_targets", 00:06:04.816 "nvmf_delete_target", 00:06:04.816 "nvmf_create_target", 00:06:04.816 
"nvmf_subsystem_allow_any_host", 00:06:04.816 "nvmf_subsystem_remove_host", 00:06:04.816 "nvmf_subsystem_add_host", 00:06:04.816 "nvmf_ns_remove_host", 00:06:04.816 "nvmf_ns_add_host", 00:06:04.816 "nvmf_subsystem_remove_ns", 00:06:04.816 "nvmf_subsystem_add_ns", 00:06:04.816 "nvmf_subsystem_listener_set_ana_state", 00:06:04.816 "nvmf_discovery_get_referrals", 00:06:04.816 "nvmf_discovery_remove_referral", 00:06:04.816 "nvmf_discovery_add_referral", 00:06:04.816 "nvmf_subsystem_remove_listener", 00:06:04.816 "nvmf_subsystem_add_listener", 00:06:04.816 "nvmf_delete_subsystem", 00:06:04.816 "nvmf_create_subsystem", 00:06:04.816 "nvmf_get_subsystems", 00:06:04.816 "env_dpdk_get_mem_stats", 00:06:04.816 "nbd_get_disks", 00:06:04.816 "nbd_stop_disk", 00:06:04.816 "nbd_start_disk", 00:06:04.816 "ublk_recover_disk", 00:06:04.816 "ublk_get_disks", 00:06:04.816 "ublk_stop_disk", 00:06:04.816 "ublk_start_disk", 00:06:04.816 "ublk_destroy_target", 00:06:04.816 "ublk_create_target", 00:06:04.816 "virtio_blk_create_transport", 00:06:04.816 "virtio_blk_get_transports", 00:06:04.816 "vhost_controller_set_coalescing", 00:06:04.816 "vhost_get_controllers", 00:06:04.816 "vhost_delete_controller", 00:06:04.816 "vhost_create_blk_controller", 00:06:04.816 "vhost_scsi_controller_remove_target", 00:06:04.816 "vhost_scsi_controller_add_target", 00:06:04.816 "vhost_start_scsi_controller", 00:06:04.816 "vhost_create_scsi_controller", 00:06:04.816 "thread_set_cpumask", 00:06:04.816 "framework_get_governor", 00:06:04.816 "framework_get_scheduler", 00:06:04.816 "framework_set_scheduler", 00:06:04.816 "framework_get_reactors", 00:06:04.816 "thread_get_io_channels", 00:06:04.816 "thread_get_pollers", 00:06:04.816 "thread_get_stats", 00:06:04.816 "framework_monitor_context_switch", 00:06:04.816 "spdk_kill_instance", 00:06:04.816 "log_enable_timestamps", 00:06:04.816 "log_get_flags", 00:06:04.816 "log_clear_flag", 00:06:04.816 "log_set_flag", 00:06:04.816 "log_get_level", 00:06:04.816 "log_set_level", 00:06:04.816 "log_get_print_level", 00:06:04.816 "log_set_print_level", 00:06:04.816 "framework_enable_cpumask_locks", 00:06:04.816 "framework_disable_cpumask_locks", 00:06:04.816 "framework_wait_init", 00:06:04.816 "framework_start_init", 00:06:04.816 "scsi_get_devices", 00:06:04.816 "bdev_get_histogram", 00:06:04.816 "bdev_enable_histogram", 00:06:04.816 "bdev_set_qos_limit", 00:06:04.816 "bdev_set_qd_sampling_period", 00:06:04.816 "bdev_get_bdevs", 00:06:04.816 "bdev_reset_iostat", 00:06:04.816 "bdev_get_iostat", 00:06:04.816 "bdev_examine", 00:06:04.816 "bdev_wait_for_examine", 00:06:04.816 "bdev_set_options", 00:06:04.816 "notify_get_notifications", 00:06:04.816 "notify_get_types", 00:06:04.816 "accel_get_stats", 00:06:04.816 "accel_set_options", 00:06:04.816 "accel_set_driver", 00:06:04.816 "accel_crypto_key_destroy", 00:06:04.816 "accel_crypto_keys_get", 00:06:04.816 "accel_crypto_key_create", 00:06:04.816 "accel_assign_opc", 00:06:04.816 "accel_get_module_info", 00:06:04.816 "accel_get_opc_assignments", 00:06:04.816 "vmd_rescan", 00:06:04.816 "vmd_remove_device", 00:06:04.816 "vmd_enable", 00:06:04.816 "sock_get_default_impl", 00:06:04.816 "sock_set_default_impl", 00:06:04.816 "sock_impl_set_options", 00:06:04.816 "sock_impl_get_options", 00:06:04.816 "iobuf_get_stats", 00:06:04.816 "iobuf_set_options", 00:06:04.816 "framework_get_pci_devices", 00:06:04.816 "framework_get_config", 00:06:04.816 "framework_get_subsystems", 00:06:04.816 "trace_get_info", 00:06:04.816 "trace_get_tpoint_group_mask", 00:06:04.816 
"trace_disable_tpoint_group", 00:06:04.816 "trace_enable_tpoint_group", 00:06:04.816 "trace_clear_tpoint_mask", 00:06:04.816 "trace_set_tpoint_mask", 00:06:04.816 "keyring_get_keys", 00:06:04.816 "spdk_get_version", 00:06:04.816 "rpc_get_methods" 00:06:04.816 ] 00:06:04.816 13:36:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:04.816 13:36:31 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.816 13:36:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.817 13:36:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:04.817 13:36:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2353660 00:06:04.817 13:36:31 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2353660 ']' 00:06:04.817 13:36:31 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2353660 00:06:04.817 13:36:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:04.817 13:36:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.817 13:36:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2353660 00:06:04.817 13:36:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.817 13:36:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.817 13:36:31 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2353660' 00:06:04.817 killing process with pid 2353660 00:06:04.817 13:36:31 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2353660 00:06:04.817 13:36:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2353660 00:06:05.384 00:06:05.384 real 0m1.587s 00:06:05.384 user 0m2.812s 00:06:05.384 sys 0m0.538s 00:06:05.384 13:36:31 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.384 13:36:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.384 ************************************ 00:06:05.384 END TEST spdkcli_tcp 00:06:05.384 ************************************ 00:06:05.385 13:36:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.385 13:36:31 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.385 13:36:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.385 13:36:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.385 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:06:05.385 ************************************ 00:06:05.385 START TEST dpdk_mem_utility 00:06:05.385 ************************************ 00:06:05.385 13:36:31 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.385 * Looking for test storage... 
00:06:05.385 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:05.385 13:36:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:05.385 13:36:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2354056 00:06:05.385 13:36:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2354056 00:06:05.385 13:36:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.385 13:36:31 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2354056 ']' 00:06:05.385 13:36:31 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.385 13:36:31 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.385 13:36:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.385 13:36:31 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.385 13:36:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.385 [2024-07-15 13:36:31.850853] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:05.385 [2024-07-15 13:36:31.850929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354056 ] 00:06:05.385 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.644 [2024-07-15 13:36:31.939054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.644 [2024-07-15 13:36:32.028786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.238 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.238 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:06.238 13:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:06.238 13:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:06.238 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.238 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.238 { 00:06:06.238 "filename": "/tmp/spdk_mem_dump.txt" 00:06:06.238 } 00:06:06.238 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.238 13:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:06.238 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:06.238 1 heaps totaling size 814.000000 MiB 00:06:06.238 size: 814.000000 MiB heap id: 0 00:06:06.238 end heaps---------- 00:06:06.238 8 mempools totaling size 598.116089 MiB 00:06:06.238 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:06.238 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:06.238 size: 84.521057 MiB name: bdev_io_2354056 00:06:06.238 size: 51.011292 MiB name: evtpool_2354056 00:06:06.238 size: 50.003479 MiB 
name: msgpool_2354056 00:06:06.238 size: 21.763794 MiB name: PDU_Pool 00:06:06.238 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:06.238 size: 0.026123 MiB name: Session_Pool 00:06:06.238 end mempools------- 00:06:06.238 6 memzones totaling size 4.142822 MiB 00:06:06.238 size: 1.000366 MiB name: RG_ring_0_2354056 00:06:06.238 size: 1.000366 MiB name: RG_ring_1_2354056 00:06:06.238 size: 1.000366 MiB name: RG_ring_4_2354056 00:06:06.238 size: 1.000366 MiB name: RG_ring_5_2354056 00:06:06.238 size: 0.125366 MiB name: RG_ring_2_2354056 00:06:06.238 size: 0.015991 MiB name: RG_ring_3_2354056 00:06:06.238 end memzones------- 00:06:06.238 13:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:06.238 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:06.238 list of free elements. size: 12.519348 MiB 00:06:06.238 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:06.238 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:06.238 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:06.238 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:06.238 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:06.238 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:06.238 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:06.238 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:06.238 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:06.238 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:06.238 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:06.238 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:06.238 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:06.238 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:06.238 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:06.238 list of standard malloc elements. 
size: 199.218079 MiB 00:06:06.238 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:06.238 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:06.238 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:06.238 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:06.238 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:06.238 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:06.238 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:06.238 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:06.238 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:06.238 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:06.238 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:06.238 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:06.238 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:06.238 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:06.238 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:06.238 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:06.238 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:06.238 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:06.238 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:06.238 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:06.238 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:06.238 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:06.238 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:06.238 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:06.238 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:06.239 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:06.239 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:06.239 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:06.239 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:06.239 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:06.239 list of memzone associated elements. 
size: 602.262573 MiB 00:06:06.239 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:06.239 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:06.239 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:06.239 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:06.239 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:06.239 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2354056_0 00:06:06.239 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:06.239 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2354056_0 00:06:06.239 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:06.239 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2354056_0 00:06:06.239 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:06.239 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:06.239 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:06.239 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:06.239 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:06.239 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2354056 00:06:06.239 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:06.239 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2354056 00:06:06.239 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:06.239 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2354056 00:06:06.239 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:06.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:06.239 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:06.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:06.239 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:06.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:06.239 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:06.239 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:06.239 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:06.239 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2354056 00:06:06.239 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:06.239 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2354056 00:06:06.239 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:06.239 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2354056 00:06:06.239 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:06.239 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2354056 00:06:06.239 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:06.239 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2354056 00:06:06.239 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:06.239 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:06.239 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:06.239 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:06.239 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:06.239 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:06.239 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:06.239 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2354056 00:06:06.239 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:06.239 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:06.239 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:06.239 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:06.239 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:06.239 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2354056 00:06:06.239 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:06.239 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:06.239 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:06.239 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2354056 00:06:06.239 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:06.239 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2354056 00:06:06.239 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:06.239 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:06.239 13:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:06.239 13:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2354056 00:06:06.239 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2354056 ']' 00:06:06.239 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2354056 00:06:06.239 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:06.593 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.593 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2354056 00:06:06.593 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.593 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.593 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2354056' 00:06:06.593 killing process with pid 2354056 00:06:06.593 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2354056 00:06:06.593 13:36:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2354056 00:06:06.852 00:06:06.852 real 0m1.465s 00:06:06.852 user 0m1.465s 00:06:06.852 sys 0m0.480s 00:06:06.852 13:36:33 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.852 13:36:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.852 ************************************ 00:06:06.852 END TEST dpdk_mem_utility 00:06:06.852 ************************************ 00:06:06.852 13:36:33 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.852 13:36:33 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:06.852 13:36:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.852 13:36:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.852 13:36:33 -- common/autotest_common.sh@10 -- # set +x 00:06:06.852 ************************************ 00:06:06.852 START TEST event 00:06:06.852 ************************************ 00:06:06.852 13:36:33 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:06.852 * Looking for test storage... 
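The dpdk_mem_utility run above shows the whole workflow of the memory inspector: ask a live target to dump its DPDK memory state, then summarize the dump with scripts/dpdk_mem_info.py. Condensed from the trace; the dump path is the RPC's default:

# Have the running target write its DPDK memory stats (to /tmp/spdk_mem_dump.txt by default)
./scripts/rpc.py env_dpdk_get_mem_stats

# Summarize heaps/mempools/memzones, then show per-element detail for heap 0
./scripts/dpdk_mem_info.py
./scripts/dpdk_mem_info.py -m 0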
00:06:06.852 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:06.852 13:36:33 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:06.852 13:36:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.852 13:36:33 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.852 13:36:33 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:06.852 13:36:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.852 13:36:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.110 ************************************ 00:06:07.110 START TEST event_perf 00:06:07.110 ************************************ 00:06:07.110 13:36:33 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.110 Running I/O for 1 seconds...[2024-07-15 13:36:33.410677] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:07.110 [2024-07-15 13:36:33.410764] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354319 ] 00:06:07.110 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.110 [2024-07-15 13:36:33.499967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.110 [2024-07-15 13:36:33.586222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.110 [2024-07-15 13:36:33.586325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.111 [2024-07-15 13:36:33.586411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.111 [2024-07-15 13:36:33.586414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.534 Running I/O for 1 seconds... 00:06:08.534 lcore 0: 209938 00:06:08.534 lcore 1: 209937 00:06:08.534 lcore 2: 209938 00:06:08.534 lcore 3: 209938 00:06:08.534 done. 00:06:08.534 00:06:08.534 real 0m1.282s 00:06:08.534 user 0m4.170s 00:06:08.534 sys 0m0.107s 00:06:08.534 13:36:34 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.534 13:36:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.534 ************************************ 00:06:08.534 END TEST event_perf 00:06:08.534 ************************************ 00:06:08.534 13:36:34 event -- common/autotest_common.sh@1142 -- # return 0 00:06:08.534 13:36:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:08.534 13:36:34 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:08.534 13:36:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.534 13:36:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.534 ************************************ 00:06:08.534 START TEST event_reactor 00:06:08.534 ************************************ 00:06:08.534 13:36:34 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:08.534 [2024-07-15 13:36:34.777163] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
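event_perf above is a standalone binary rather than an RPC-driven test: it takes a core mask and a runtime in seconds and prints per-lcore event counts. The reactor and reactor_perf runs that follow in this log have the same shape with just -t. Invocations as captured in the trace, with relative paths assumed:

# Event-framework microbenchmark: 4 cores (0xF), 1 second
./test/event/event_perf/event_perf -m 0xF -t 1

# The follow-on tests are analogous single-binary runs
./test/event/reactor/reactor -t 1
./test/event/reactor_perf/reactor_perf -t 1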
00:06:08.534 [2024-07-15 13:36:34.777231] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354520 ] 00:06:08.534 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.534 [2024-07-15 13:36:34.865147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.534 [2024-07-15 13:36:34.955853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.911 test_start 00:06:09.911 oneshot 00:06:09.911 tick 100 00:06:09.911 tick 100 00:06:09.911 tick 250 00:06:09.911 tick 100 00:06:09.911 tick 100 00:06:09.911 tick 100 00:06:09.911 tick 250 00:06:09.911 tick 500 00:06:09.911 tick 100 00:06:09.911 tick 100 00:06:09.911 tick 250 00:06:09.911 tick 100 00:06:09.911 tick 100 00:06:09.911 test_end 00:06:09.911 00:06:09.911 real 0m1.281s 00:06:09.911 user 0m1.155s 00:06:09.911 sys 0m0.121s 00:06:09.911 13:36:36 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.911 13:36:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:09.911 ************************************ 00:06:09.911 END TEST event_reactor 00:06:09.911 ************************************ 00:06:09.911 13:36:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.911 13:36:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.911 13:36:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:09.911 13:36:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.911 13:36:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.911 ************************************ 00:06:09.911 START TEST event_reactor_perf 00:06:09.911 ************************************ 00:06:09.911 13:36:36 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.911 [2024-07-15 13:36:36.141614] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:09.911 [2024-07-15 13:36:36.141675] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354721 ] 00:06:09.911 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.911 [2024-07-15 13:36:36.227374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.911 [2024-07-15 13:36:36.308885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.292 test_start 00:06:11.292 test_end 00:06:11.292 Performance: 517831 events per second 00:06:11.292 00:06:11.292 real 0m1.269s 00:06:11.292 user 0m1.163s 00:06:11.292 sys 0m0.101s 00:06:11.292 13:36:37 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.292 13:36:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.292 ************************************ 00:06:11.292 END TEST event_reactor_perf 00:06:11.292 ************************************ 00:06:11.292 13:36:37 event -- common/autotest_common.sh@1142 -- # return 0 00:06:11.292 13:36:37 event -- event/event.sh@49 -- # uname -s 00:06:11.292 13:36:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:11.292 13:36:37 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:11.292 13:36:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.292 13:36:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.292 13:36:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.292 ************************************ 00:06:11.292 START TEST event_scheduler 00:06:11.292 ************************************ 00:06:11.292 13:36:37 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:11.292 * Looking for test storage... 00:06:11.292 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:11.292 13:36:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:11.292 13:36:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2354957 00:06:11.292 13:36:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:11.292 13:36:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.292 13:36:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2354957 00:06:11.292 13:36:37 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2354957 ']' 00:06:11.292 13:36:37 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.292 13:36:37 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.292 13:36:37 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.292 13:36:37 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.292 13:36:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.292 [2024-07-15 13:36:37.622475] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:11.292 [2024-07-15 13:36:37.622547] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354957 ] 00:06:11.292 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.292 [2024-07-15 13:36:37.690196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.292 [2024-07-15 13:36:37.783117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.292 [2024-07-15 13:36:37.783217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.292 [2024-07-15 13:36:37.783301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.292 [2024-07-15 13:36:37.783304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:12.230 13:36:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 [2024-07-15 13:36:38.453791] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:12.230 [2024-07-15 13:36:38.453810] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:12.230 [2024-07-15 13:36:38.453820] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:12.230 [2024-07-15 13:36:38.453828] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:12.230 [2024-07-15 13:36:38.453836] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.230 13:36:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 [2024-07-15 13:36:38.531859] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
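Because the scheduler app above starts with --wait-for-rpc, the framework is configured entirely over RPC before subsystem init is released, which is why framework_set_scheduler and framework_start_init appear in the trace. A minimal sketch of that handshake, assuming the default /var/tmp/spdk.sock:

# Launch paused: -m 0xF core mask, -p 0x2 main lcore, --wait-for-rpc defers framework init
./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

# Select the dynamic scheduler, then let framework initialization proceed
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init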
00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.230 13:36:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.230 13:36:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 ************************************ 00:06:12.230 START TEST scheduler_create_thread 00:06:12.230 ************************************ 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 2 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 3 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 4 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 5 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 6 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.230 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 7 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.231 8 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.231 9 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.231 10 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.231 13:36:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.799 13:36:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.799 13:36:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:12.799 13:36:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.799 13:36:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.181 13:36:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.181 13:36:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:14.181 13:36:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:14.181 13:36:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.181 13:36:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.559 13:36:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.559 00:06:15.559 real 0m3.101s 00:06:15.559 user 0m0.023s 00:06:15.559 sys 0m0.008s 00:06:15.559 13:36:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.559 13:36:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.559 ************************************ 00:06:15.559 END TEST scheduler_create_thread 00:06:15.559 ************************************ 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:15.559 13:36:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:15.559 13:36:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2354957 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2354957 ']' 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2354957 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2354957 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2354957' 00:06:15.559 killing process with pid 2354957 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2354957 00:06:15.559 13:36:41 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2354957 00:06:15.559 [2024-07-15 13:36:42.055527] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
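Note: scheduler_create_thread above exercises the test app's plugin RPCs: it pins active (-a 100) and idle (-a 0) threads to each of the four cores, then creates unpinned ones, raises one to 50% activity and deletes another. The lifecycle, condensed onto a single thread as a sketch (the thread ID is whatever the create call returns; 11 and 12 in this run):

  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)  # create a new thread, initially idle
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50               # make it report 50% busy to the scheduler
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"                      # and remove it again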
00:06:15.819 00:06:15.819 real 0m4.825s 00:06:15.819 user 0m9.332s 00:06:15.819 sys 0m0.445s 00:06:15.819 13:36:42 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.819 13:36:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.819 ************************************ 00:06:15.819 END TEST event_scheduler 00:06:15.819 ************************************ 00:06:15.819 13:36:42 event -- common/autotest_common.sh@1142 -- # return 0 00:06:15.819 13:36:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:16.078 13:36:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:16.078 13:36:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.078 13:36:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.078 13:36:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.078 ************************************ 00:06:16.078 START TEST app_repeat 00:06:16.078 ************************************ 00:06:16.078 13:36:42 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2355560 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2355560' 00:06:16.078 Process app_repeat pid: 2355560 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:16.078 spdk_app_start Round 0 00:06:16.078 13:36:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2355560 /var/tmp/spdk-nbd.sock 00:06:16.078 13:36:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2355560 ']' 00:06:16.078 13:36:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.078 13:36:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.078 13:36:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.078 13:36:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.078 13:36:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.078 [2024-07-15 13:36:42.428370] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
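Note: the app_repeat test that starts here launches the app on two cores with a 4-second repeat period against its own RPC socket, waits for that socket, and then drives a create/verify/kill cycle for rounds 0 through 2. Roughly, as a sketch ($SPDK_DIR standing in for the full workspace checkout path shown in the trace):

  $SPDK_DIR/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock                                # block until the RPC socket answers
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096     # 64 MB bdev with 4096-byte blocks -> Malloc0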
00:06:16.078 [2024-07-15 13:36:42.428430] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355560 ] 00:06:16.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.078 [2024-07-15 13:36:42.516994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.338 [2024-07-15 13:36:42.610960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.338 [2024-07-15 13:36:42.610961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.913 13:36:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.913 13:36:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:16.913 13:36:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.913 Malloc0 00:06:17.175 13:36:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.175 Malloc1 00:06:17.175 13:36:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.175 13:36:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.434 /dev/nbd0 00:06:17.434 13:36:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.434 13:36:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.434 13:36:43 event.app_repeat -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.434 1+0 records in 00:06:17.434 1+0 records out 00:06:17.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000110315 s, 37.1 MB/s 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.434 13:36:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:17.434 13:36:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.434 13:36:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.434 13:36:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.693 /dev/nbd1 00:06:17.693 13:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.693 13:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.693 1+0 records in 00:06:17.693 1+0 records out 00:06:17.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266415 s, 15.4 MB/s 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.693 13:36:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:17.693 13:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.693 13:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:06:17.693 13:36:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.693 13:36:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.693 13:36:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.951 { 00:06:17.951 "nbd_device": "/dev/nbd0", 00:06:17.951 "bdev_name": "Malloc0" 00:06:17.951 }, 00:06:17.951 { 00:06:17.951 "nbd_device": "/dev/nbd1", 00:06:17.951 "bdev_name": "Malloc1" 00:06:17.951 } 00:06:17.951 ]' 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.951 { 00:06:17.951 "nbd_device": "/dev/nbd0", 00:06:17.951 "bdev_name": "Malloc0" 00:06:17.951 }, 00:06:17.951 { 00:06:17.951 "nbd_device": "/dev/nbd1", 00:06:17.951 "bdev_name": "Malloc1" 00:06:17.951 } 00:06:17.951 ]' 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.951 /dev/nbd1' 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.951 /dev/nbd1' 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.951 256+0 records in 00:06:17.951 256+0 records out 00:06:17.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114732 s, 91.4 MB/s 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.951 256+0 records in 00:06:17.951 256+0 records out 00:06:17.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204084 s, 51.4 MB/s 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.951 256+0 records in 00:06:17.951 256+0 records out 00:06:17.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221002 s, 47.4 MB/s 00:06:17.951 13:36:44 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.951 13:36:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.952 13:36:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.952 13:36:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.952 13:36:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.952 13:36:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.952 13:36:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.952 13:36:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.952 13:36:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.952 13:36:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.952 13:36:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.952 13:36:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.210 13:36:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.210 13:36:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.210 13:36:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.210 13:36:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.210 13:36:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.210 13:36:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.210 13:36:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.210 13:36:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.210 13:36:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.210 13:36:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.469 13:36:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.469 13:36:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.469 13:36:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.469 13:36:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.469 13:36:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.469 
13:36:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.469 13:36:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.469 13:36:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.469 13:36:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.469 13:36:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.469 13:36:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.727 13:36:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.727 13:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.727 13:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.727 13:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.727 13:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.727 13:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.727 13:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.727 13:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.727 13:36:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.728 13:36:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.728 13:36:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.728 13:36:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.728 13:36:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.987 13:36:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.987 [2024-07-15 13:36:45.488293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.246 [2024-07-15 13:36:45.578247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.247 [2024-07-15 13:36:45.578248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.247 [2024-07-15 13:36:45.625782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.247 [2024-07-15 13:36:45.625830] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.784 13:36:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.784 13:36:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:21.784 spdk_app_start Round 1 00:06:21.784 13:36:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2355560 /var/tmp/spdk-nbd.sock 00:06:21.784 13:36:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2355560 ']' 00:06:21.784 13:36:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.784 13:36:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.784 13:36:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
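Note: each round's data check is the NBD round-trip shown in the Round 0 output above: expose Malloc0/Malloc1 as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each with direct I/O, read it back with cmp, then stop the devices. Condensed for one device as a sketch (the temp-file path is shortened from the workspace path in the trace):

  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256                  # 1 MiB reference pattern
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct        # write it through the NBD export
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                                   # the bdev must read back byte-identical data
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0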
00:06:21.784 13:36:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.784 13:36:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.044 13:36:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.044 13:36:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:22.044 13:36:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.303 Malloc0 00:06:22.303 13:36:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.563 Malloc1 00:06:22.563 13:36:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.563 13:36:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.563 /dev/nbd0 00:06:22.563 13:36:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.563 13:36:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:22.563 1+0 records in 00:06:22.563 1+0 records out 00:06:22.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248844 s, 16.5 MB/s 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:22.563 13:36:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:22.823 13:36:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.823 13:36:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.823 13:36:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.823 /dev/nbd1 00:06:22.823 13:36:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.823 13:36:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.823 1+0 records in 00:06:22.823 1+0 records out 00:06:22.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261533 s, 15.7 MB/s 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:22.823 13:36:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:22.823 13:36:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.823 13:36:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.823 13:36:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.823 13:36:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.823 13:36:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.082 { 00:06:23.082 
"nbd_device": "/dev/nbd0", 00:06:23.082 "bdev_name": "Malloc0" 00:06:23.082 }, 00:06:23.082 { 00:06:23.082 "nbd_device": "/dev/nbd1", 00:06:23.082 "bdev_name": "Malloc1" 00:06:23.082 } 00:06:23.082 ]' 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.082 { 00:06:23.082 "nbd_device": "/dev/nbd0", 00:06:23.082 "bdev_name": "Malloc0" 00:06:23.082 }, 00:06:23.082 { 00:06:23.082 "nbd_device": "/dev/nbd1", 00:06:23.082 "bdev_name": "Malloc1" 00:06:23.082 } 00:06:23.082 ]' 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.082 /dev/nbd1' 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.082 /dev/nbd1' 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.082 256+0 records in 00:06:23.082 256+0 records out 00:06:23.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00353243 s, 297 MB/s 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.082 256+0 records in 00:06:23.082 256+0 records out 00:06:23.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201413 s, 52.1 MB/s 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.082 256+0 records in 00:06:23.082 256+0 records out 00:06:23.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222195 s, 47.2 MB/s 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.082 13:36:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.342 13:36:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.601 13:36:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.859 13:36:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.859 13:36:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.117 13:36:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.376 [2024-07-15 13:36:50.663157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.376 [2024-07-15 13:36:50.745623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.376 [2024-07-15 13:36:50.745624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.376 [2024-07-15 13:36:50.794158] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.376 [2024-07-15 13:36:50.794205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.662 13:36:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.662 13:36:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:27.662 spdk_app_start Round 2 00:06:27.662 13:36:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2355560 /var/tmp/spdk-nbd.sock 00:06:27.662 13:36:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2355560 ']' 00:06:27.662 13:36:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.662 13:36:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.662 13:36:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
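Note: the repeated grep/dd/stat fragments in every round come from the waitfornbd helper: before any data is written it polls /proc/partitions until the kernel registers the nbd device, then proves the device answers I/O with a single direct read; a matching waitfornbd_exit polls until the entry disappears after nbd_stop_disk. A sketch of that shape (the retry delay is an assumption; the real helper's timing may differ):

  for i in $(seq 1 20); do
      grep -q -w nbd0 /proc/partitions && break        # device is visible to the kernel
      sleep 0.1                                        # assumed back-off between polls
  done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one direct read must succeed...
  [ "$(stat -c %s /tmp/nbdtest)" != 0 ]                          # ...and return a non-empty block
  rm -f /tmp/nbdtest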
00:06:27.662 13:36:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.662 13:36:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.662 13:36:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.662 13:36:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:27.662 13:36:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.662 Malloc0 00:06:27.662 13:36:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.662 Malloc1 00:06:27.662 13:36:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.662 13:36:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.662 13:36:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.662 13:36:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.662 13:36:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.662 13:36:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.662 13:36:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.663 13:36:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.663 13:36:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.663 13:36:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.663 13:36:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.663 13:36:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.663 13:36:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.663 13:36:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.663 13:36:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.663 13:36:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.922 /dev/nbd0 00:06:27.922 13:36:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.922 13:36:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:27.922 1+0 records in 00:06:27.922 1+0 records out 00:06:27.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022556 s, 18.2 MB/s 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:27.922 13:36:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:27.922 13:36:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.922 13:36:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.922 13:36:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.181 /dev/nbd1 00:06:28.181 13:36:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.181 13:36:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.181 13:36:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:28.181 13:36:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:28.181 13:36:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.181 13:36:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.181 13:36:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:28.181 13:36:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:28.181 13:36:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.181 13:36:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.181 13:36:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.181 1+0 records in 00:06:28.181 1+0 records out 00:06:28.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183403 s, 22.3 MB/s 00:06:28.182 13:36:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.182 13:36:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:28.182 13:36:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.182 13:36:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.182 13:36:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:28.182 13:36:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.182 13:36:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.182 13:36:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.182 13:36:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.182 13:36:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.182 13:36:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.182 { 00:06:28.182 
"nbd_device": "/dev/nbd0", 00:06:28.182 "bdev_name": "Malloc0" 00:06:28.182 }, 00:06:28.182 { 00:06:28.182 "nbd_device": "/dev/nbd1", 00:06:28.182 "bdev_name": "Malloc1" 00:06:28.182 } 00:06:28.182 ]' 00:06:28.182 13:36:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.182 { 00:06:28.182 "nbd_device": "/dev/nbd0", 00:06:28.182 "bdev_name": "Malloc0" 00:06:28.182 }, 00:06:28.182 { 00:06:28.182 "nbd_device": "/dev/nbd1", 00:06:28.182 "bdev_name": "Malloc1" 00:06:28.182 } 00:06:28.182 ]' 00:06:28.182 13:36:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.441 /dev/nbd1' 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.441 /dev/nbd1' 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.441 256+0 records in 00:06:28.441 256+0 records out 00:06:28.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102099 s, 103 MB/s 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.441 256+0 records in 00:06:28.441 256+0 records out 00:06:28.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201855 s, 51.9 MB/s 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.441 256+0 records in 00:06:28.441 256+0 records out 00:06:28.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217029 s, 48.3 MB/s 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.441 13:36:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.442 13:36:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.701 13:36:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.701 13:36:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.701 13:36:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.701 13:36:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.702 13:36:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.962 13:36:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.962 13:36:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.274 13:36:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.533 [2024-07-15 13:36:55.844913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.533 [2024-07-15 13:36:55.928100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.533 [2024-07-15 13:36:55.928100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.533 [2024-07-15 13:36:55.973752] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.533 [2024-07-15 13:36:55.973795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:32.825 13:36:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2355560 /var/tmp/spdk-nbd.sock 00:06:32.825 13:36:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2355560 ']' 00:06:32.825 13:36:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
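[editor's note] The nbd_dd_data_verify / nbd_stop_disks sequence traced above follows a simple pattern: write 1 MiB of random data to a scratch file, copy it onto each NBD device with O_DIRECT, byte-compare each device against the file, then detach the devices over RPC until nbd_get_disks returns an empty list. A minimal standalone sketch of that pattern, assuming the same paths and RPC socket as the trace ($SPDK_DIR is introduced here only for brevity):

    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk      # path taken from the trace
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=$SPDK_DIR/test/event/nbdrandtest

    # write phase: 256 x 4 KiB of random data, copied onto every NBD device with O_DIRECT
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB of each device against the source file
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"

    # teardown: detach both devices, then confirm nbd_get_disks reports no /dev/nbd entries
    for dev in /dev/nbd0 /dev/nbd1; do
        $RPC nbd_stop_disk "$dev"
    done
    count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]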
00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:32.826 13:36:58 event.app_repeat -- event/event.sh@39 -- # killprocess 2355560 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2355560 ']' 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2355560 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2355560 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2355560' 00:06:32.826 killing process with pid 2355560 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2355560 00:06:32.826 13:36:58 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2355560 00:06:32.826 spdk_app_start is called in Round 0. 00:06:32.826 Shutdown signal received, stop current app iteration 00:06:32.826 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:32.826 spdk_app_start is called in Round 1. 00:06:32.826 Shutdown signal received, stop current app iteration 00:06:32.826 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:32.826 spdk_app_start is called in Round 2. 00:06:32.826 Shutdown signal received, stop current app iteration 00:06:32.826 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 reinitialization... 00:06:32.826 spdk_app_start is called in Round 3. 
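[editor's note] The Round 0..3 summary above comes from the app_repeat helper: the test repeatedly asks the running app to shut down over RPC and waits for it to listen again before the next round. A rough approximation of that control flow (not test/event/event.sh verbatim; $SPDK_DIR as in the sketch above, $app_repeat_pid standing in for the repeat app's pid, 2355560 in the trace):

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for round in 1 2 3; do
        $RPC spdk_kill_instance SIGTERM     # app logs "Shutdown signal received, stop current app iteration"
        sleep 3                             # give the app time to tear down and start the next round
        waitforlisten "$app_repeat_pid" /var/tmp/spdk-nbd.sock   # helper from autotest_common.sh, as in the trace
    done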
00:06:32.826 Shutdown signal received, stop current app iteration 00:06:32.826 13:36:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:32.826 13:36:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:32.826 00:06:32.826 real 0m16.683s 00:06:32.826 user 0m35.387s 00:06:32.826 sys 0m3.225s 00:06:32.826 13:36:59 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.826 13:36:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.826 ************************************ 00:06:32.826 END TEST app_repeat 00:06:32.826 ************************************ 00:06:32.826 13:36:59 event -- common/autotest_common.sh@1142 -- # return 0 00:06:32.826 13:36:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:32.826 13:36:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:32.826 13:36:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.826 13:36:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.826 13:36:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.826 ************************************ 00:06:32.826 START TEST cpu_locks 00:06:32.826 ************************************ 00:06:32.826 13:36:59 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:32.826 * Looking for test storage... 00:06:32.826 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:32.826 13:36:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:32.826 13:36:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:32.826 13:36:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:32.826 13:36:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:32.826 13:36:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.826 13:36:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.826 13:36:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.826 ************************************ 00:06:32.826 START TEST default_locks 00:06:32.826 ************************************ 00:06:32.826 13:36:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:32.826 13:36:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2358135 00:06:32.826 13:36:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2358135 00:06:32.826 13:36:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.826 13:36:59 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2358135 ']' 00:06:32.826 13:36:59 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.826 13:36:59 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.826 13:36:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
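[editor's note] TEST default_locks, which starts above, launches a single spdk_tgt on core mask 0x1 and then verifies with lslocks (shown in the following lines) that the process holds a file lock whose path contains spdk_cpu_lock. A hedged sketch of that check, with $SPDK_DIR as before:

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &      # single-core mask, as in the trace
    pid=$!
    waitforlisten "$pid"                         # autotest_common.sh helper

    # the running target should hold a lock on its per-core lock file
    lslocks -p "$pid" | grep -q spdk_cpu_lock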
00:06:32.826 13:36:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.826 13:36:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.085 [2024-07-15 13:36:59.374300] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:33.085 [2024-07-15 13:36:59.374361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358135 ] 00:06:33.085 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.085 [2024-07-15 13:36:59.461281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.085 [2024-07-15 13:36:59.550824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.023 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.023 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:34.023 13:37:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2358135 00:06:34.023 13:37:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2358135 00:06:34.023 13:37:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.282 lslocks: write error 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2358135 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2358135 ']' 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2358135 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2358135 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2358135' 00:06:34.282 killing process with pid 2358135 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2358135 00:06:34.282 13:37:00 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2358135 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2358135 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2358135 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2358135 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2358135 ']' 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.543 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2358135) - No such process 00:06:34.543 ERROR: process (pid: 2358135) is no longer running 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:34.543 00:06:34.543 real 0m1.727s 00:06:34.543 user 0m1.794s 00:06:34.543 sys 0m0.664s 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.543 13:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.543 ************************************ 00:06:34.543 END TEST default_locks 00:06:34.543 ************************************ 00:06:34.803 13:37:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:34.803 13:37:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:34.803 13:37:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.803 13:37:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.803 13:37:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.803 ************************************ 00:06:34.803 START TEST default_locks_via_rpc 00:06:34.803 ************************************ 00:06:34.803 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:34.803 13:37:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2358369 00:06:34.803 13:37:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2358369 00:06:34.803 13:37:01 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.803 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2358369 ']' 00:06:34.803 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.803 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.803 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.803 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.803 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.803 [2024-07-15 13:37:01.181039] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:34.803 [2024-07-15 13:37:01.181095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358369 ] 00:06:34.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.803 [2024-07-15 13:37:01.267857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.062 [2024-07-15 13:37:01.358481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.631 13:37:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.631 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.631 13:37:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2358369 00:06:35.631 13:37:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2358369 00:06:35.631 13:37:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
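[editor's note] default_locks_via_rpc, traced above, toggles the same per-core locks at runtime: framework_disable_cpumask_locks releases them (after which the trace's no_locks check finds no /var/tmp/spdk_cpu_lock_* files), and framework_enable_cpumask_locks re-acquires them so lslocks sees spdk_cpu_lock again. A sketch of that RPC dance, assuming the default /var/tmp/spdk.sock socket and $pid/$SPDK_DIR as in the earlier sketches:

    RPC="$SPDK_DIR/scripts/rpc.py"                            # defaults to /var/tmp/spdk.sock
    $RPC framework_disable_cpumask_locks                      # release the core locks at runtime
    compgen -G "/var/tmp/spdk_cpu_lock_*" > /dev/null && echo "unexpected: lock files still present"
    $RPC framework_enable_cpumask_locks                       # take them again
    lslocks -p "$pid" | grep -q spdk_cpu_lock                 # the lock is visible once more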
00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2358369 00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2358369 ']' 00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2358369 00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2358369 00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2358369' 00:06:36.200 killing process with pid 2358369 00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2358369 00:06:36.200 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2358369 00:06:36.459 00:06:36.459 real 0m1.701s 00:06:36.459 user 0m1.734s 00:06:36.459 sys 0m0.623s 00:06:36.459 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.459 13:37:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.459 ************************************ 00:06:36.459 END TEST default_locks_via_rpc 00:06:36.459 ************************************ 00:06:36.459 13:37:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:36.459 13:37:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:36.459 13:37:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.459 13:37:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.459 13:37:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.459 ************************************ 00:06:36.459 START TEST non_locking_app_on_locked_coremask 00:06:36.459 ************************************ 00:06:36.459 13:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:36.459 13:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2358583 00:06:36.459 13:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2358583 /var/tmp/spdk.sock 00:06:36.459 13:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.459 13:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2358583 ']' 00:06:36.459 13:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.459 13:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.459 13:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.459 13:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.460 13:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.460 [2024-07-15 13:37:02.963821] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:36.460 [2024-07-15 13:37:02.963879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358583 ] 00:06:36.719 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.719 [2024-07-15 13:37:03.033807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.719 [2024-07-15 13:37:03.123810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2358761 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2358761 /var/tmp/spdk2.sock 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2358761 ']' 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.288 13:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.548 [2024-07-15 13:37:03.820715] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:37.548 [2024-07-15 13:37:03.820775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358761 ] 00:06:37.548 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.548 [2024-07-15 13:37:03.917273] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
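[editor's note] non_locking_app_on_locked_coremask runs two targets on the same mask: the first claims the core-0 lock, and the second is launched with --disable-cpumask-locks on a second RPC socket, so it prints "CPU core locks deactivated." (as above) and starts instead of failing over the already-claimed core. Sketch of the two launches as they appear in the trace:

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &                   # claims /var/tmp/spdk_cpu_lock_000
    first=$!
    waitforlisten "$first"

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    second=$!
    waitforlisten "$second" /var/tmp/spdk2.sock               # succeeds: the second target never tries to lock core 0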
00:06:37.548 [2024-07-15 13:37:03.917298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.808 [2024-07-15 13:37:04.083358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.377 13:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.377 13:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:38.377 13:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2358583 00:06:38.377 13:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.377 13:37:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2358583 00:06:39.757 lslocks: write error 00:06:39.757 13:37:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2358583 00:06:39.757 13:37:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2358583 ']' 00:06:39.757 13:37:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2358583 00:06:39.757 13:37:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:39.757 13:37:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.757 13:37:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2358583 00:06:39.757 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.757 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.757 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2358583' 00:06:39.757 killing process with pid 2358583 00:06:39.757 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2358583 00:06:39.757 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2358583 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2358761 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2358761 ']' 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2358761 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2358761 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2358761' 00:06:40.324 
killing process with pid 2358761 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2358761 00:06:40.324 13:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2358761 00:06:40.892 00:06:40.892 real 0m4.200s 00:06:40.892 user 0m4.446s 00:06:40.892 sys 0m1.427s 00:06:40.892 13:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.892 13:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.892 ************************************ 00:06:40.892 END TEST non_locking_app_on_locked_coremask 00:06:40.892 ************************************ 00:06:40.892 13:37:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:40.892 13:37:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:40.892 13:37:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.893 13:37:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.893 13:37:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.893 ************************************ 00:06:40.893 START TEST locking_app_on_unlocked_coremask 00:06:40.893 ************************************ 00:06:40.893 13:37:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:40.893 13:37:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2359172 00:06:40.893 13:37:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2359172 /var/tmp/spdk.sock 00:06:40.893 13:37:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:40.893 13:37:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2359172 ']' 00:06:40.893 13:37:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.893 13:37:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.893 13:37:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.893 13:37:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.893 13:37:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.893 [2024-07-15 13:37:07.241994] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:40.893 [2024-07-15 13:37:07.242047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359172 ] 00:06:40.893 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.893 [2024-07-15 13:37:07.327275] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:40.893 [2024-07-15 13:37:07.327310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.893 [2024-07-15 13:37:07.415854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2359350 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2359350 /var/tmp/spdk2.sock 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2359350 ']' 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.830 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.830 [2024-07-15 13:37:08.084239] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
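[editor's note] locking_app_on_unlocked_coremask inverts the previous case: here the first target is the one started with --disable-cpumask-locks, leaving core 0 unlocked, and the plain second target on the same mask takes the lock itself (the trace below checks locks_exist against the second pid, 2359350). Sketch:

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # leaves core 0 unlocked
    first=$!
    waitforlisten "$first"

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # free to take the core-0 lock itself
    second=$!
    waitforlisten "$second" /var/tmp/spdk2.sock
    lslocks -p "$second" | grep -q spdk_cpu_lock                      # the lock belongs to the second target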
00:06:41.830 [2024-07-15 13:37:08.084298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359350 ] 00:06:41.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.830 [2024-07-15 13:37:08.178891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.830 [2024-07-15 13:37:08.344692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.400 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.400 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:42.400 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2359350 00:06:42.400 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2359350 00:06:42.400 13:37:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.780 lslocks: write error 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2359172 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2359172 ']' 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2359172 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2359172 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2359172' 00:06:43.780 killing process with pid 2359172 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2359172 00:06:43.780 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2359172 00:06:44.348 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2359350 00:06:44.348 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2359350 ']' 00:06:44.348 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2359350 00:06:44.348 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:44.348 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.348 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2359350 00:06:44.607 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:44.607 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.607 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2359350' 00:06:44.607 killing process with pid 2359350 00:06:44.607 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2359350 00:06:44.607 13:37:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2359350 00:06:44.867 00:06:44.867 real 0m4.041s 00:06:44.867 user 0m4.270s 00:06:44.867 sys 0m1.366s 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.867 ************************************ 00:06:44.867 END TEST locking_app_on_unlocked_coremask 00:06:44.867 ************************************ 00:06:44.867 13:37:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:44.867 13:37:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:44.867 13:37:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.867 13:37:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.867 13:37:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.867 ************************************ 00:06:44.867 START TEST locking_app_on_locked_coremask 00:06:44.867 ************************************ 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2359756 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2359756 /var/tmp/spdk.sock 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2359756 ']' 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.867 13:37:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.867 [2024-07-15 13:37:11.373103] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:44.867 [2024-07-15 13:37:11.373165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359756 ] 00:06:45.126 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.126 [2024-07-15 13:37:11.460195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.126 [2024-07-15 13:37:11.550630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2359937 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2359937 /var/tmp/spdk2.sock 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2359937 /var/tmp/spdk2.sock 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2359937 /var/tmp/spdk2.sock 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2359937 ']' 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.695 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.955 [2024-07-15 13:37:12.242298] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
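[editor's note] In locking_app_on_locked_coremask both targets use mask 0x1 and neither passes --disable-cpumask-locks, so the second launch is wrapped in the NOT helper: it is expected to abort with "Cannot create lock on core 0, probably process ... has claimed it", which is exactly what the next lines of the trace report. Sketch of the expected-failure check, using NOT from autotest_common.sh as in the trace:

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &                     # first target claims core 0
    first=$!
    waitforlisten "$first"

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    second=$!
    NOT waitforlisten "$second" /var/tmp/spdk2.sock             # NOT inverts the status: the second target
                                                                # is expected to exit instead of listening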
00:06:45.955 [2024-07-15 13:37:12.242359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359937 ] 00:06:45.955 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.955 [2024-07-15 13:37:12.337781] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2359756 has claimed it. 00:06:45.955 [2024-07-15 13:37:12.337820] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.523 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2359937) - No such process 00:06:46.523 ERROR: process (pid: 2359937) is no longer running 00:06:46.523 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.523 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:46.523 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:46.523 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.523 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.523 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.523 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2359756 00:06:46.523 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2359756 00:06:46.523 13:37:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.091 lslocks: write error 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2359756 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2359756 ']' 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2359756 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2359756 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2359756' 00:06:47.091 killing process with pid 2359756 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2359756 00:06:47.091 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2359756 00:06:47.659 00:06:47.659 real 0m2.628s 00:06:47.659 user 0m2.858s 00:06:47.659 sys 0m0.806s 00:06:47.659 13:37:13 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.659 13:37:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.659 ************************************ 00:06:47.659 END TEST locking_app_on_locked_coremask 00:06:47.659 ************************************ 00:06:47.659 13:37:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:47.659 13:37:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.659 13:37:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.659 13:37:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.659 13:37:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.659 ************************************ 00:06:47.659 START TEST locking_overlapped_coremask 00:06:47.659 ************************************ 00:06:47.659 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:47.659 13:37:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2360151 00:06:47.659 13:37:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2360151 /var/tmp/spdk.sock 00:06:47.659 13:37:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.659 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2360151 ']' 00:06:47.659 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.659 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.659 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.659 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.659 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.659 [2024-07-15 13:37:14.088000] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:06:47.659 [2024-07-15 13:37:14.088065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360151 ] 00:06:47.659 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.659 [2024-07-15 13:37:14.177519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.918 [2024-07-15 13:37:14.268345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.918 [2024-07-15 13:37:14.268467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.918 [2024-07-15 13:37:14.268467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2360335 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2360335 /var/tmp/spdk2.sock 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2360335 /var/tmp/spdk2.sock 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2360335 /var/tmp/spdk2.sock 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2360335 ']' 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.485 13:37:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.485 [2024-07-15 13:37:14.938326] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
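[editor's note] locking_overlapped_coremask uses two masks that share exactly one core: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the second is again wrapped in NOT and, as the following lines show, dies with "Cannot create lock on core 2, probably process ... has claimed it"; the test then verifies that only the first target's three lock files remain. Sketch:

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x7 &                          # locks cores 0, 1 and 2
    first=$!
    waitforlisten "$first"

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock &  # wants cores 2, 3, 4 -> collides on core 2
    second=$!
    NOT waitforlisten "$second" /var/tmp/spdk2.sock

    # only the first target's lock files should remain afterwards
    locks=(/var/tmp/spdk_cpu_lock_*)
    [[ ${locks[*]} == "/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002" ]]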
00:06:48.485 [2024-07-15 13:37:14.938382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360335 ] 00:06:48.485 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.743 [2024-07-15 13:37:15.039188] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2360151 has claimed it. 00:06:48.743 [2024-07-15 13:37:15.039232] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.310 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2360335) - No such process 00:06:49.310 ERROR: process (pid: 2360335) is no longer running 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2360151 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2360151 ']' 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2360151 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360151 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360151' 00:06:49.310 killing process with pid 2360151 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2360151 00:06:49.310 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2360151 00:06:49.568 00:06:49.568 real 0m1.945s 00:06:49.568 user 0m5.279s 00:06:49.568 sys 0m0.516s 00:06:49.568 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.568 13:37:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.568 ************************************ 00:06:49.568 END TEST locking_overlapped_coremask 00:06:49.568 ************************************ 00:06:49.568 13:37:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.568 13:37:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.568 13:37:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.568 13:37:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.568 13:37:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.568 ************************************ 00:06:49.568 START TEST locking_overlapped_coremask_via_rpc 00:06:49.568 ************************************ 00:06:49.568 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:49.568 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2360545 00:06:49.568 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2360545 /var/tmp/spdk.sock 00:06:49.568 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.568 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2360545 ']' 00:06:49.568 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.568 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.568 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.568 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.568 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.827 [2024-07-15 13:37:16.117019] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:49.827 [2024-07-15 13:37:16.117066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360545 ] 00:06:49.827 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.827 [2024-07-15 13:37:16.201850] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
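The check_remaining_locks step traced above compares the lock files the target leaves in /var/tmp against the cores it claimed. The lines below are a lightly commented restatement of that check, a sketch rather than the harness's verbatim code, assuming the -m 0x7 case (cores 0-2) used in this run.

  # Sketch of the lock-file check from cpu_locks.sh traced above (commented, not verbatim).
  # A target started with -m 0x7 owns cores 0-2, so exactly these files should exist:
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'lock files match cores 0-2'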
00:06:49.827 [2024-07-15 13:37:16.201872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.827 [2024-07-15 13:37:16.285624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.827 [2024-07-15 13:37:16.285670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.827 [2024-07-15 13:37:16.285670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2360575 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2360575 /var/tmp/spdk2.sock 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2360575 ']' 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.451 13:37:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.710 [2024-07-15 13:37:16.979111] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:50.710 [2024-07-15 13:37:16.979175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360575 ] 00:06:50.710 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.710 [2024-07-15 13:37:17.084646] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
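The two targets in this test use core masks 0x7 and 0x1c, and the core they share is what the cpumask-lock machinery exercised further down will complain about. A quick sketch of that overlap arithmetic, using only the masks shown above:

  # Sketch: the overlap between the two core masks used in this test.
  first=0x7     # primary target, cores 0-2
  second=0x1c   # second target on /var/tmp/spdk2.sock, cores 2-4
  printf 'shared mask: 0x%x (core 2)\n' $(( first & second ))   # prints 0x4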
00:06:50.710 [2024-07-15 13:37:17.084674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.968 [2024-07-15 13:37:17.252226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.968 [2024-07-15 13:37:17.252338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.968 [2024-07-15 13:37:17.252339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.536 [2024-07-15 13:37:17.833633] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2360545 has claimed it. 
00:06:51.536 request: 00:06:51.536 { 00:06:51.536 "method": "framework_enable_cpumask_locks", 00:06:51.536 "req_id": 1 00:06:51.536 } 00:06:51.536 Got JSON-RPC error response 00:06:51.536 response: 00:06:51.536 { 00:06:51.536 "code": -32603, 00:06:51.536 "message": "Failed to claim CPU core: 2" 00:06:51.536 } 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2360545 /var/tmp/spdk.sock 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2360545 ']' 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.536 13:37:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.536 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.536 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:51.536 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2360575 /var/tmp/spdk2.sock 00:06:51.536 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2360575 ']' 00:06:51.536 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.536 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.536 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
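The framework_enable_cpumask_locks failure above is an ordinary JSON-RPC exchange with the second target's socket. A minimal sketch of issuing the same call by hand follows; the scripts/rpc.py path and its -s flag are assumptions based on the usual SPDK layout, not something this log shows, and the expected outcome is the -32603 error printed above for as long as the first target still holds core 2.

  # Hypothetical manual reproduction (sketch only, script path and flag assumed):
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # Expected while core 2 is already claimed: "Failed to claim CPU core: 2" (code -32603),
  # matching the JSON-RPC error response shown above.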
00:06:51.536 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.536 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.795 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.795 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:51.795 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:51.795 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.795 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.795 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.795 00:06:51.795 real 0m2.146s 00:06:51.795 user 0m0.858s 00:06:51.795 sys 0m0.225s 00:06:51.795 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.795 13:37:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.795 ************************************ 00:06:51.795 END TEST locking_overlapped_coremask_via_rpc 00:06:51.795 ************************************ 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:51.795 13:37:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:51.795 13:37:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2360545 ]] 00:06:51.795 13:37:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2360545 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2360545 ']' 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2360545 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360545 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360545' 00:06:51.795 killing process with pid 2360545 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2360545 00:06:51.795 13:37:18 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2360545 00:06:52.363 13:37:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2360575 ]] 00:06:52.363 13:37:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2360575 00:06:52.364 13:37:18 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2360575 ']' 00:06:52.364 13:37:18 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2360575 00:06:52.364 13:37:18 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:52.364 13:37:18 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.364 13:37:18 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2360575 00:06:52.364 13:37:18 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:52.364 13:37:18 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:52.364 13:37:18 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2360575' 00:06:52.364 killing process with pid 2360575 00:06:52.364 13:37:18 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2360575 00:06:52.364 13:37:18 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2360575 00:06:52.623 13:37:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.623 13:37:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.623 13:37:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2360545 ]] 00:06:52.623 13:37:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2360545 00:06:52.623 13:37:19 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2360545 ']' 00:06:52.623 13:37:19 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2360545 00:06:52.623 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2360545) - No such process 00:06:52.623 13:37:19 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2360545 is not found' 00:06:52.623 Process with pid 2360545 is not found 00:06:52.623 13:37:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2360575 ]] 00:06:52.623 13:37:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2360575 00:06:52.623 13:37:19 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2360575 ']' 00:06:52.623 13:37:19 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2360575 00:06:52.623 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2360575) - No such process 00:06:52.623 13:37:19 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2360575 is not found' 00:06:52.623 Process with pid 2360575 is not found 00:06:52.623 13:37:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.623 00:06:52.623 real 0m19.873s 00:06:52.623 user 0m31.890s 00:06:52.623 sys 0m6.774s 00:06:52.623 13:37:19 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.623 13:37:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.623 ************************************ 00:06:52.623 END TEST cpu_locks 00:06:52.623 ************************************ 00:06:52.623 13:37:19 event -- common/autotest_common.sh@1142 -- # return 0 00:06:52.623 00:06:52.623 real 0m45.847s 00:06:52.623 user 1m23.336s 00:06:52.623 sys 0m11.217s 00:06:52.623 13:37:19 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.623 13:37:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.623 ************************************ 00:06:52.623 END TEST event 00:06:52.623 ************************************ 00:06:52.623 13:37:19 -- common/autotest_common.sh@1142 -- # return 0 00:06:52.623 13:37:19 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:52.623 13:37:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.623 13:37:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.623 13:37:19 -- 
common/autotest_common.sh@10 -- # set +x 00:06:52.883 ************************************ 00:06:52.883 START TEST thread 00:06:52.883 ************************************ 00:06:52.883 13:37:19 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:52.883 * Looking for test storage... 00:06:52.883 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:52.883 13:37:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.883 13:37:19 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:52.883 13:37:19 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.883 13:37:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.883 ************************************ 00:06:52.883 START TEST thread_poller_perf 00:06:52.883 ************************************ 00:06:52.883 13:37:19 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.883 [2024-07-15 13:37:19.344531] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:52.883 [2024-07-15 13:37:19.344624] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361026 ] 00:06:52.883 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.142 [2024-07-15 13:37:19.420039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.142 [2024-07-15 13:37:19.503916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.142 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:54.080 ====================================== 00:06:54.080 busy:2308426836 (cyc) 00:06:54.080 total_run_count: 407000 00:06:54.080 tsc_hz: 2300000000 (cyc) 00:06:54.080 ====================================== 00:06:54.080 poller_cost: 5671 (cyc), 2465 (nsec) 00:06:54.080 00:06:54.080 real 0m1.271s 00:06:54.080 user 0m1.169s 00:06:54.080 sys 0m0.094s 00:06:54.080 13:37:20 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.080 13:37:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.080 ************************************ 00:06:54.080 END TEST thread_poller_perf 00:06:54.080 ************************************ 00:06:54.368 13:37:20 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:54.368 13:37:20 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.368 13:37:20 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:54.368 13:37:20 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.368 13:37:20 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.368 ************************************ 00:06:54.368 START TEST thread_poller_perf 00:06:54.368 ************************************ 00:06:54.368 13:37:20 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.368 [2024-07-15 13:37:20.702291] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:54.368 [2024-07-15 13:37:20.702356] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361225 ] 00:06:54.368 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.368 [2024-07-15 13:37:20.790259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.368 [2024-07-15 13:37:20.881562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.368 Running 1000 pollers for 1 seconds with 0 microseconds period. 
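The poller_cost line in the summary above is simply the busy cycle count divided by total_run_count, converted to nanoseconds with the reported TSC rate. A small sketch with the numbers from this 1-microsecond-period run; the 0-microsecond run reported next follows the same formula.

  # Sketch: re-deriving poller_cost from the counters printed above.
  busy_cyc=2308426836    # busy: (cyc)
  runs=407000            # total_run_count
  tsc_hz=2300000000      # tsc_hz: (cyc), i.e. 2.3 GHz
  cost_cyc=$(( busy_cyc / runs ))                   # 5671 (cyc)
  cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 2465 (nsec)
  echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"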
00:06:55.753 ====================================== 00:06:55.753 busy:2301572930 (cyc) 00:06:55.753 total_run_count: 5438000 00:06:55.753 tsc_hz: 2300000000 (cyc) 00:06:55.753 ====================================== 00:06:55.753 poller_cost: 423 (cyc), 183 (nsec) 00:06:55.753 00:06:55.753 real 0m1.284s 00:06:55.753 user 0m1.167s 00:06:55.753 sys 0m0.111s 00:06:55.753 13:37:21 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.753 13:37:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.753 ************************************ 00:06:55.753 END TEST thread_poller_perf 00:06:55.753 ************************************ 00:06:55.753 13:37:22 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:55.753 13:37:22 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.753 00:06:55.753 real 0m2.839s 00:06:55.753 user 0m2.433s 00:06:55.753 sys 0m0.414s 00:06:55.753 13:37:22 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.753 13:37:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.753 ************************************ 00:06:55.753 END TEST thread 00:06:55.753 ************************************ 00:06:55.753 13:37:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:55.753 13:37:22 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:55.753 13:37:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.753 13:37:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.753 13:37:22 -- common/autotest_common.sh@10 -- # set +x 00:06:55.753 ************************************ 00:06:55.753 START TEST accel 00:06:55.753 ************************************ 00:06:55.753 13:37:22 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:55.753 * Looking for test storage... 00:06:55.753 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:55.753 13:37:22 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:55.753 13:37:22 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:55.753 13:37:22 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.753 13:37:22 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2361468 00:06:55.753 13:37:22 accel -- accel/accel.sh@63 -- # waitforlisten 2361468 00:06:55.753 13:37:22 accel -- common/autotest_common.sh@829 -- # '[' -z 2361468 ']' 00:06:55.753 13:37:22 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.753 13:37:22 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:55.753 13:37:22 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.753 13:37:22 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:55.753 13:37:22 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.753 13:37:22 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.753 13:37:22 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.753 13:37:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.753 13:37:22 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.753 13:37:22 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.753 13:37:22 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.753 13:37:22 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.753 13:37:22 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:55.753 13:37:22 accel -- accel/accel.sh@41 -- # jq -r . 00:06:55.753 [2024-07-15 13:37:22.250755] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:55.753 [2024-07-15 13:37:22.250813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361468 ] 00:06:56.012 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.012 [2024-07-15 13:37:22.320221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.012 [2024-07-15 13:37:22.410289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.580 13:37:23 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.580 13:37:23 accel -- common/autotest_common.sh@862 -- # return 0 00:06:56.580 13:37:23 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:56.580 13:37:23 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:56.580 13:37:23 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:56.580 13:37:23 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:56.580 13:37:23 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:56.580 13:37:23 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:56.580 13:37:23 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.580 13:37:23 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:56.580 13:37:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.580 13:37:23 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.580 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.580 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.580 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.580 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.580 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.580 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.580 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.581 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.581 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.581 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.581 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.581 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.581 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.581 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.581 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.581 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.581 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.581 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.581 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.581 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.581 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.581 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.581 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.581 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.838 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.838 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.838 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.838 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.838 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.838 
13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.838 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.838 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.838 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.838 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.838 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.838 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.838 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.838 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.838 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.838 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.838 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.838 13:37:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.838 13:37:23 accel -- accel/accel.sh@72 -- # IFS== 00:06:56.838 13:37:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:56.838 13:37:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:56.838 13:37:23 accel -- accel/accel.sh@75 -- # killprocess 2361468 00:06:56.838 13:37:23 accel -- common/autotest_common.sh@948 -- # '[' -z 2361468 ']' 00:06:56.838 13:37:23 accel -- common/autotest_common.sh@952 -- # kill -0 2361468 00:06:56.838 13:37:23 accel -- common/autotest_common.sh@953 -- # uname 00:06:56.838 13:37:23 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.838 13:37:23 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2361468 00:06:56.838 13:37:23 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.838 13:37:23 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.838 13:37:23 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2361468' 00:06:56.838 killing process with pid 2361468 00:06:56.838 13:37:23 accel -- common/autotest_common.sh@967 -- # kill 2361468 00:06:56.838 13:37:23 accel -- common/autotest_common.sh@972 -- # wait 2361468 00:06:57.095 13:37:23 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:57.095 13:37:23 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:57.095 13:37:23 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:57.095 13:37:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.095 13:37:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.095 13:37:23 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:57.095 13:37:23 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:57.095 13:37:23 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:57.095 13:37:23 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.095 13:37:23 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.095 13:37:23 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.095 13:37:23 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.095 13:37:23 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.095 13:37:23 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:57.095 13:37:23 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:57.095 13:37:23 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.095 13:37:23 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:57.095 13:37:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.353 13:37:23 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:57.354 13:37:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:57.354 13:37:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.354 13:37:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.354 ************************************ 00:06:57.354 START TEST accel_missing_filename 00:06:57.354 ************************************ 00:06:57.354 13:37:23 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:57.354 13:37:23 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:57.354 13:37:23 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:57.354 13:37:23 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.354 13:37:23 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.354 13:37:23 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.354 13:37:23 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.354 13:37:23 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:57.354 13:37:23 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:57.354 13:37:23 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:57.354 13:37:23 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.354 13:37:23 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.354 13:37:23 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.354 13:37:23 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.354 13:37:23 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.354 13:37:23 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:57.354 13:37:23 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:57.354 [2024-07-15 13:37:23.690324] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:57.354 [2024-07-15 13:37:23.690393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361694 ] 00:06:57.354 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.354 [2024-07-15 13:37:23.778824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.354 [2024-07-15 13:37:23.869450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.613 [2024-07-15 13:37:23.917418] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.613 [2024-07-15 13:37:23.987307] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:57.613 A filename is required. 
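The accel_missing_filename case that just failed (compress with no -l input file) is driven by the harness's NOT wrapper, which succeeds only when the wrapped command fails; the xtrace that follows walks through the real helper. The sketch below shows only the general shape of that pattern, not SPDK's actual function.

  # Sketch of the expect-failure pattern behind these negative tests (simplified).
  expect_failure() {
      if "$@"; then
          echo "ERROR: '$*' unexpectedly succeeded" >&2
          return 1
      fi
      return 0    # the command failed, which is what the test wants
  }
  # e.g. compress without an input file must fail, as seen above:
  # expect_failure accel_perf -t 1 -w compress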
00:06:57.613 13:37:24 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:57.613 13:37:24 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.613 13:37:24 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:57.613 13:37:24 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:57.613 13:37:24 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:57.613 13:37:24 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.613 00:06:57.613 real 0m0.411s 00:06:57.613 user 0m0.281s 00:06:57.613 sys 0m0.164s 00:06:57.613 13:37:24 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.613 13:37:24 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:57.613 ************************************ 00:06:57.613 END TEST accel_missing_filename 00:06:57.613 ************************************ 00:06:57.613 13:37:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.613 13:37:24 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:57.613 13:37:24 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:57.613 13:37:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.613 13:37:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.873 ************************************ 00:06:57.873 START TEST accel_compress_verify 00:06:57.873 ************************************ 00:06:57.873 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:57.873 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:57.873 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:57.873 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.873 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.873 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.873 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.873 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:57.873 13:37:24 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:57.873 13:37:24 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:57.873 13:37:24 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.873 13:37:24 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.873 13:37:24 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.873 13:37:24 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.873 13:37:24 accel.accel_compress_verify -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.873 13:37:24 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:57.873 13:37:24 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:57.873 [2024-07-15 13:37:24.184117] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:57.873 [2024-07-15 13:37:24.184182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361878 ] 00:06:57.873 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.873 [2024-07-15 13:37:24.272620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.873 [2024-07-15 13:37:24.363832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.133 [2024-07-15 13:37:24.412075] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.133 [2024-07-15 13:37:24.481299] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:58.133 00:06:58.133 Compression does not support the verify option, aborting. 00:06:58.133 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:58.133 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.133 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:58.133 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:58.133 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:58.133 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.133 00:06:58.133 real 0m0.411s 00:06:58.133 user 0m0.288s 00:06:58.133 sys 0m0.160s 00:06:58.133 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.133 13:37:24 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:58.133 ************************************ 00:06:58.133 END TEST accel_compress_verify 00:06:58.133 ************************************ 00:06:58.133 13:37:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.133 13:37:24 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:58.133 13:37:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.133 13:37:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.133 13:37:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.133 ************************************ 00:06:58.133 START TEST accel_wrong_workload 00:06:58.133 ************************************ 00:06:58.133 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:58.133 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:58.133 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:58.133 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.133 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.133 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.133 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:06:58.133 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:58.133 13:37:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:58.133 13:37:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:58.133 13:37:24 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.133 13:37:24 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.133 13:37:24 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.133 13:37:24 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.133 13:37:24 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.133 13:37:24 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:58.133 13:37:24 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:58.393 Unsupported workload type: foobar 00:06:58.393 [2024-07-15 13:37:24.679912] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:58.393 accel_perf options: 00:06:58.393 [-h help message] 00:06:58.393 [-q queue depth per core] 00:06:58.393 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.393 [-T number of threads per core 00:06:58.393 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:58.393 [-t time in seconds] 00:06:58.393 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.393 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:58.393 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.393 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.393 [-S for crc32c workload, use this seed value (default 0) 00:06:58.393 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.393 [-f for fill workload, use this BYTE value (default 255) 00:06:58.393 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.393 [-y verify result if this switch is on] 00:06:58.393 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.393 Can be used to spread operations across a wider range of memory. 
00:06:58.393 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:58.393 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.393 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.393 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.393 00:06:58.393 real 0m0.039s 00:06:58.393 user 0m0.019s 00:06:58.393 sys 0m0.019s 00:06:58.393 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.393 13:37:24 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:58.393 ************************************ 00:06:58.393 END TEST accel_wrong_workload 00:06:58.393 ************************************ 00:06:58.393 Error: writing output failed: Broken pipe 00:06:58.393 13:37:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.393 13:37:24 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.393 13:37:24 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:58.393 13:37:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.393 13:37:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.393 ************************************ 00:06:58.393 START TEST accel_negative_buffers 00:06:58.393 ************************************ 00:06:58.393 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.393 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:58.393 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:58.393 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.393 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.393 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.393 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.393 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:58.393 13:37:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:58.393 13:37:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:58.393 13:37:24 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.393 13:37:24 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.393 13:37:24 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.393 13:37:24 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.393 13:37:24 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.393 13:37:24 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:58.393 13:37:24 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:58.393 -x option must be non-negative. 
00:06:58.393 [2024-07-15 13:37:24.800943] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:58.393 accel_perf options: 00:06:58.393 [-h help message] 00:06:58.393 [-q queue depth per core] 00:06:58.393 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.393 [-T number of threads per core 00:06:58.394 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:58.394 [-t time in seconds] 00:06:58.394 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.394 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:58.394 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.394 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.394 [-S for crc32c workload, use this seed value (default 0) 00:06:58.394 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.394 [-f for fill workload, use this BYTE value (default 255) 00:06:58.394 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.394 [-y verify result if this switch is on] 00:06:58.394 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.394 Can be used to spread operations across a wider range of memory. 00:06:58.394 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:58.394 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.394 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.394 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.394 00:06:58.394 real 0m0.039s 00:06:58.394 user 0m0.019s 00:06:58.394 sys 0m0.020s 00:06:58.394 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.394 13:37:24 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:58.394 ************************************ 00:06:58.394 END TEST accel_negative_buffers 00:06:58.394 ************************************ 00:06:58.394 Error: writing output failed: Broken pipe 00:06:58.394 13:37:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.394 13:37:24 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:58.394 13:37:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:58.394 13:37:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.394 13:37:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.394 ************************************ 00:06:58.394 START TEST accel_crc32c 00:06:58.394 ************************************ 00:06:58.394 13:37:24 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:58.394 13:37:24 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:58.394 [2024-07-15 13:37:24.911367] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:06:58.394 [2024-07-15 13:37:24.911435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361952 ] 00:06:58.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.654 [2024-07-15 13:37:24.997830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.654 [2024-07-15 13:37:25.083572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.654 13:37:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:00.034 13:37:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.034 00:07:00.034 real 0m1.404s 00:07:00.034 user 0m1.258s 00:07:00.034 sys 0m0.160s 00:07:00.034 13:37:26 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.034 13:37:26 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:00.034 ************************************ 00:07:00.034 END TEST accel_crc32c 00:07:00.034 ************************************ 00:07:00.034 13:37:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.034 13:37:26 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:00.034 13:37:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:00.034 13:37:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.034 13:37:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.034 ************************************ 00:07:00.034 START TEST accel_crc32c_C2 00:07:00.034 ************************************ 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:00.034 13:37:26 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:00.034 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:00.034 [2024-07-15 13:37:26.400597] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:00.034 [2024-07-15 13:37:26.400654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362150 ] 00:07:00.034 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.034 [2024-07-15 13:37:26.468761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.034 [2024-07-15 13:37:26.553572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:00.294 13:37:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.231 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.489 00:07:01.489 real 0m1.388s 00:07:01.489 user 0m1.269s 00:07:01.489 sys 0m0.134s 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.489 13:37:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:01.489 ************************************ 00:07:01.489 END TEST accel_crc32c_C2 00:07:01.489 ************************************ 00:07:01.489 13:37:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.489 13:37:27 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:01.489 13:37:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:01.489 13:37:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.489 13:37:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.490 ************************************ 00:07:01.490 START TEST accel_copy 00:07:01.490 ************************************ 00:07:01.490 13:37:27 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
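The accel_crc32c_C2 case that just finished (real 0m1.388s) is the same crc32c workload rerun with -C 2; per the usage text earlier, -C sets the io vector size, so each operation is fed two chained buffers instead of one. A sketch of the two variants side by side, with the workspace path shortened into a shell variable of my own:

ACCEL_PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
"$ACCEL_PERF" -t 1 -w crc32c -S 32 -y    # base run: one buffer per operation
"$ACCEL_PERF" -t 1 -w crc32c -y -C 2     # _C2 run: two iovecs per operation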
00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:01.490 13:37:27 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:01.490 [2024-07-15 13:37:27.872115] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:01.490 [2024-07-15 13:37:27.872178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362361 ] 00:07:01.490 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.490 [2024-07-15 13:37:27.941532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.749 [2024-07-15 13:37:28.026324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
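The -c /dev/fd/62 argument in the traced copy command is the JSON accel configuration assembled by build_accel_config; in this run accel_json_cfg=() stays empty, so no module settings are injected. The wrapper appears to hand that config to accel_perf through bash process substitution, roughly as in the sketch below (the config body here is a hypothetical placeholder, not what this job passed):

ACCEL_PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
# <(...) surfaces inside accel_perf as /dev/fd/NN, matching the traced -c /dev/fd/62
"$ACCEL_PERF" -c <(printf '{"subsystems": []}') -t 1 -w copy -y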
00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.749 13:37:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.126 13:37:29 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.126 13:37:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.127 13:37:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.127 13:37:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.127 13:37:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:03.127 13:37:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.127 00:07:03.127 real 0m1.391s 00:07:03.127 user 0m1.258s 00:07:03.127 sys 0m0.148s 00:07:03.127 13:37:29 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.127 13:37:29 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:03.127 ************************************ 00:07:03.127 END TEST accel_copy 00:07:03.127 ************************************ 00:07:03.127 13:37:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.127 13:37:29 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.127 13:37:29 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:03.127 13:37:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.127 13:37:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.127 ************************************ 00:07:03.127 START TEST accel_fill 00:07:03.127 ************************************ 00:07:03.127 13:37:29 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 
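The fill case starting above is driven by run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y; per the usage text, -f is the byte value written, -q the queue depth per core, and -a the number of tasks allocated per core (they appear in the readback that follows as val=0x80 and val=64). A sketch of the underlying invocation (same assumed workspace path as before):

ACCEL_PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
# fill 4096-byte buffers with byte 128 (0x80), queue depth 64, 64 tasks per core, verify
"$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y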
00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:03.127 [2024-07-15 13:37:29.347347] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:03.127 [2024-07-15 13:37:29.347411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362585 ] 00:07:03.127 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.127 [2024-07-15 13:37:29.436378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.127 [2024-07-15 13:37:29.520892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.127 13:37:29 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.127 13:37:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:04.505 13:37:30 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.505 00:07:04.505 real 0m1.411s 00:07:04.505 user 0m1.261s 00:07:04.505 sys 0m0.164s 00:07:04.505 13:37:30 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.505 13:37:30 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:04.505 ************************************ 00:07:04.505 END TEST accel_fill 00:07:04.505 ************************************ 00:07:04.505 13:37:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.505 13:37:30 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:04.505 13:37:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:04.505 13:37:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.505 13:37:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.505 ************************************ 00:07:04.505 START TEST accel_copy_crc32c 00:07:04.505 ************************************ 00:07:04.505 13:37:30 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:04.505 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:04.505 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:04.505 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.505 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.506 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:04.506 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:04.506 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:04.506 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.506 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 
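accel_copy_crc32c, which starts above, exercises the combined operation that copies the source while computing its CRC32C in the same pass; the traced defaults use 4096-byte buffers, and the -o option from the usage text controls that transfer size. A sketch of the direct invocation, plus an illustrative larger-transfer variant that this job did not run:

ACCEL_PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
"$ACCEL_PERF" -t 1 -w copy_crc32c -y              # defaults: 4 KiB transfers
"$ACCEL_PERF" -t 1 -w copy_crc32c -y -o 65536     # illustrative only: 64 KiB transfers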
00:07:04.506 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.506 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.506 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.506 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:04.506 13:37:30 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:04.506 [2024-07-15 13:37:30.844741] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:04.506 [2024-07-15 13:37:30.844820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362842 ] 00:07:04.506 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.506 [2024-07-15 13:37:30.931902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.506 [2024-07-15 13:37:31.016548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.764 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.765 13:37:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.701 00:07:05.701 real 0m1.404s 00:07:05.701 user 0m1.261s 00:07:05.701 sys 0m0.158s 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.701 13:37:32 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:05.701 ************************************ 00:07:05.701 END TEST accel_copy_crc32c 00:07:05.701 ************************************ 00:07:05.960 13:37:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.960 13:37:32 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:05.960 13:37:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:05.960 13:37:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.960 13:37:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.960 ************************************ 00:07:05.960 START TEST accel_copy_crc32c_C2 00:07:05.960 ************************************ 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:05.960 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:05.960 [2024-07-15 13:37:32.329894] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:05.960 [2024-07-15 13:37:32.329964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363090 ] 00:07:05.960 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.960 [2024-07-15 13:37:32.414807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.218 [2024-07-15 13:37:32.504065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.218 13:37:32 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.218 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
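In the -C 2 readback being traced here, the values include both a 4096-byte and an 8192-byte buffer, consistent with two 4096-byte source vectors feeding a destination of twice that size. A sketch of the variant, with flags taken from the traced run_test line and the same assumed workspace path:

ACCEL_PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
# copy_crc32c with an io vector size of 2 (-C), results verified (-y)
"$ACCEL_PERF" -t 1 -w copy_crc32c -y -C 2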
00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.219 13:37:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.596 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.596 13:37:33 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.597 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:07.597 13:37:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.597 00:07:07.597 real 0m1.395s 00:07:07.597 user 0m1.266s 00:07:07.597 sys 0m0.144s 00:07:07.597 13:37:33 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.597 13:37:33 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:07.597 ************************************ 00:07:07.597 END TEST accel_copy_crc32c_C2 00:07:07.597 ************************************ 00:07:07.597 13:37:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.597 13:37:33 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:07.597 13:37:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.597 13:37:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.597 13:37:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.597 ************************************ 00:07:07.597 START TEST accel_dualcast 00:07:07.597 ************************************ 00:07:07.597 13:37:33 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:07.597 13:37:33 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:07.597 [2024-07-15 13:37:33.803844] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
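Each START TEST block drives SPDK's accel_perf example with the workload under test; the -c /dev/fd/62 argument above indicates the JSON accel configuration is handed over a file descriptor (typically process substitution from build_accel_config), and the EAL parameter line that follows records the resulting DPDK options. A rough standalone equivalent of this dualcast run, assuming the default software path needs no JSON config:

    # Run the dualcast workload for 1 second and verify the results (-y),
    # roughly what accel_test does above; path is relative to a built SPDK tree.
    ./build/examples/accel_perf -t 1 -w dualcast -y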
00:07:07.597 [2024-07-15 13:37:33.803906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363333 ] 00:07:07.597 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.597 [2024-07-15 13:37:33.871640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.597 [2024-07-15 13:37:33.955108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.597 13:37:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:08.977 13:37:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.977 00:07:08.977 real 0m1.385s 00:07:08.977 user 0m1.261s 00:07:08.977 sys 0m0.137s 00:07:08.977 13:37:35 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.977 13:37:35 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:08.977 ************************************ 00:07:08.977 END TEST accel_dualcast 00:07:08.977 ************************************ 00:07:08.977 13:37:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.977 13:37:35 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:08.977 13:37:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.977 13:37:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.977 13:37:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.977 ************************************ 00:07:08.977 START TEST accel_compare 00:07:08.977 ************************************ 00:07:08.977 13:37:35 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:08.977 [2024-07-15 13:37:35.267164] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
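The START/END TEST banners, the real/user/sys lines and the final "return 0" around each test come from the run_test wrapper, invoked here as "run_test accel_compare accel_test -t 1 -w compare -y". The real wrapper lives in common/autotest_common.sh and is not shown in this log; the sketch below only mirrors the behaviour observable above (banners around a timed invocation):

    # Hedged sketch of a run_test-style wrapper, not the real implementation.
    run_test_sketch() {
        local name=$1; shift
        echo "************** START TEST $name **************"
        time "$@"     # produces the real/user/sys lines seen in the log
        echo "**************  END TEST $name  **************"
    }
    run_test_sketch accel_compare accel_test -t 1 -w compare -y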
00:07:08.977 [2024-07-15 13:37:35.267224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363530 ] 00:07:08.977 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.977 [2024-07-15 13:37:35.354898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.977 [2024-07-15 13:37:35.439055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.977 13:37:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.353 
13:37:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:10.353 13:37:36 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.353 00:07:10.353 real 0m1.407s 00:07:10.353 user 0m1.262s 00:07:10.353 sys 0m0.158s 00:07:10.353 13:37:36 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.353 13:37:36 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:10.353 ************************************ 00:07:10.353 END TEST accel_compare 00:07:10.353 ************************************ 00:07:10.353 13:37:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.353 13:37:36 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:10.353 13:37:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.353 13:37:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.353 13:37:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.353 ************************************ 00:07:10.353 START TEST accel_xor 00:07:10.353 ************************************ 00:07:10.353 13:37:36 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:10.353 13:37:36 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:10.353 [2024-07-15 13:37:36.759101] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
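The xor workload is exercised twice: first with the default number of source buffers (the "val=2" read a little further down), then, under the second START TEST accel_xor block below, with three sources selected through -x 3, which appears to set the xor source count. Stripped of the harness, the two invocations are:

    # Default xor (two source buffers), then the three-source variant run below.
    ./build/examples/accel_perf -t 1 -w xor -y
    ./build/examples/accel_perf -t 1 -w xor -y -x 3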
00:07:10.353 [2024-07-15 13:37:36.759161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363738 ] 00:07:10.353 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.353 [2024-07-15 13:37:36.845691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.611 [2024-07-15 13:37:36.929100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.611 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.612 13:37:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.065 00:07:12.065 real 0m1.407s 00:07:12.065 user 0m1.262s 00:07:12.065 sys 0m0.158s 00:07:12.065 13:37:38 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.065 13:37:38 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:12.065 ************************************ 00:07:12.065 END TEST accel_xor 00:07:12.065 ************************************ 00:07:12.065 13:37:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.065 13:37:38 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:12.065 13:37:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:12.065 13:37:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.065 13:37:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.065 ************************************ 00:07:12.065 START TEST accel_xor 00:07:12.065 ************************************ 00:07:12.065 13:37:38 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:12.065 [2024-07-15 13:37:38.250945] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
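Every one of these tests finishes with the same three checks at accel/accel.sh@27 before its timing is reported: the module and opcode recorded while parsing must be non-empty, and the module must be the software engine, which is what this CI configuration pins the accel framework to. In terms of the variables the trace shows being set (accel_module at @22, accel_opc at @23), the checks amount to:

    # End-of-test assertions, reconstructed from the [[ ... ]] lines in the trace.
    [[ -n $accel_module ]]              # some engine handled the job
    [[ -n $accel_opc ]]                 # the expected opcode was exercised
    [[ $accel_module == software ]]     # this run expects the software engine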
00:07:12.065 [2024-07-15 13:37:38.251011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2363936 ] 00:07:12.065 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.065 [2024-07-15 13:37:38.318609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.065 [2024-07-15 13:37:38.403919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.065 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.066 13:37:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:13.447 13:37:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.447 00:07:13.447 real 0m1.391s 00:07:13.447 user 0m1.266s 00:07:13.447 sys 0m0.137s 00:07:13.447 13:37:39 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.447 13:37:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:13.447 ************************************ 00:07:13.447 END TEST accel_xor 00:07:13.447 ************************************ 00:07:13.447 13:37:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.447 13:37:39 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:13.447 13:37:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:13.447 13:37:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.447 13:37:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.447 ************************************ 00:07:13.447 START TEST accel_dif_verify 00:07:13.447 ************************************ 00:07:13.447 13:37:39 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:13.447 [2024-07-15 13:37:39.726077] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
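The dif_verify and dif_generate runs that follow read two 4096-byte buffer sizes plus '512 bytes' and '8 bytes' from the job description, which is consistent with T10 DIF protected data: 512-byte blocks each carrying 8 bytes of protection information inside a 4 KiB transfer (an interpretation of the trace, not something the log states). The standalone equivalent of the command the harness launches here is simply:

    # dif_verify for 1 second on the software engine, mirroring the invocation above.
    ./build/examples/accel_perf -t 1 -w dif_verify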
00:07:13.447 [2024-07-15 13:37:39.726146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364141 ] 00:07:13.447 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.447 [2024-07-15 13:37:39.815516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.447 [2024-07-15 13:37:39.909880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.447 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.707 13:37:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:14.645 13:37:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.645 00:07:14.645 real 0m1.427s 00:07:14.645 user 0m1.287s 00:07:14.645 sys 0m0.155s 00:07:14.645 13:37:41 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.645 13:37:41 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:14.645 ************************************ 00:07:14.645 END TEST accel_dif_verify 00:07:14.645 ************************************ 00:07:14.645 13:37:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.645 13:37:41 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:14.645 13:37:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:14.645 13:37:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.645 13:37:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.905 ************************************ 00:07:14.905 START TEST accel_dif_generate 00:07:14.905 ************************************ 00:07:14.905 13:37:41 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.905 
13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:14.905 13:37:41 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:14.905 [2024-07-15 13:37:41.231305] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:14.905 [2024-07-15 13:37:41.231370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364348 ] 00:07:14.905 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.905 [2024-07-15 13:37:41.318778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.905 [2024-07-15 13:37:41.402144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:15.165 13:37:41 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.165 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.166 13:37:41 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.166 13:37:41 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.104 13:37:42 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:16.104 13:37:42 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.104 00:07:16.104 real 0m1.408s 00:07:16.104 user 0m1.266s 00:07:16.104 sys 0m0.158s 00:07:16.104 13:37:42 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.104 13:37:42 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:16.104 ************************************ 00:07:16.104 END TEST accel_dif_generate 00:07:16.104 ************************************ 00:07:16.364 13:37:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.364 13:37:42 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:16.364 13:37:42 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:16.364 13:37:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.364 13:37:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.364 ************************************ 00:07:16.364 START TEST accel_dif_generate_copy 00:07:16.364 ************************************ 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:16.364 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:16.364 [2024-07-15 13:37:42.721776] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
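Editor's note: the accel_dif_generate_copy block that starts above is driven by the same run_test/accel_test wrapper as the earlier cases; accel.sh ends up running build/examples/accel_perf with a JSON accel config on /dev/fd/62 plus the flags shown on the logged command line. A minimal hand-run sketch of the same software-path case follows; the SPDK variable is shorthand introduced only for this note, and dropping "-c /dev/fd/62" rests on the assumption that accel_perf then falls back to the default software module, which is the path this run exercises anyway.

# Sketch only: re-run the logged dif_generate_copy case by hand.
# -t 1 bounds the run to one second, -w selects the workload; the
# fd-62 JSON config used by accel.sh is omitted here (assumed not
# needed for the software path).
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/examples/accel_perf -t 1 -w dif_generate_copy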
00:07:16.364 [2024-07-15 13:37:42.721838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364548 ] 00:07:16.364 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.364 [2024-07-15 13:37:42.789005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.364 [2024-07-15 13:37:42.873033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.624 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.625 13:37:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.567 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.567 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.567 13:37:44 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:17.567 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.567 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.567 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.567 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.567 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.568 00:07:17.568 real 0m1.385s 00:07:17.568 user 0m1.263s 00:07:17.568 sys 0m0.134s 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.568 13:37:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.568 ************************************ 00:07:17.568 END TEST accel_dif_generate_copy 00:07:17.568 ************************************ 00:07:17.827 13:37:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.827 13:37:44 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:17.827 13:37:44 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.827 13:37:44 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:17.827 13:37:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.827 13:37:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.827 ************************************ 00:07:17.827 START TEST accel_comp 00:07:17.827 ************************************ 00:07:17.827 13:37:44 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.827 13:37:44 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:17.827 13:37:44 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:17.827 [2024-07-15 13:37:44.192531] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:17.827 [2024-07-15 13:37:44.192599] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364754 ] 00:07:17.827 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.827 [2024-07-15 13:37:44.276092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.087 [2024-07-15 13:37:44.359810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.087 13:37:44 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:19.466 13:37:45 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.466 00:07:19.466 real 0m1.406s 00:07:19.466 user 0m1.267s 00:07:19.466 sys 0m0.153s 00:07:19.466 13:37:45 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.466 13:37:45 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:19.466 ************************************ 00:07:19.466 END TEST accel_comp 00:07:19.466 ************************************ 00:07:19.466 13:37:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.466 13:37:45 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.466 13:37:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:19.466 13:37:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.466 13:37:45 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.466 ************************************ 00:07:19.466 START TEST accel_decomp 00:07:19.466 ************************************ 00:07:19.466 13:37:45 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:19.466 13:37:45 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:19.467 [2024-07-15 13:37:45.683051] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
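Editor's note: unlike the dif_* cases, the compress/decompress tests point accel_perf at an input file with "-l" (the bib file under test/accel in this workspace) and add "-y", taken here to be the verify switch used by accel.sh. A hedged standalone sketch of the decompress run logged above, with SPDK again as local shorthand:

# Sketch only: mirror the logged decompress invocation.
# -l names the input file to decompress, -y requests verification of
# the result, -t 1 bounds the run to one second.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y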
00:07:19.467 [2024-07-15 13:37:45.683111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364953 ] 00:07:19.467 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.467 [2024-07-15 13:37:45.768291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.467 [2024-07-15 13:37:45.854507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.467 13:37:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.844 13:37:47 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:20.844 13:37:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.844 00:07:20.844 real 0m1.412s 00:07:20.844 user 0m1.266s 00:07:20.844 sys 0m0.161s 00:07:20.844 13:37:47 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.844 13:37:47 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:20.844 ************************************ 00:07:20.844 END TEST accel_decomp 00:07:20.844 ************************************ 00:07:20.844 13:37:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.844 13:37:47 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.844 13:37:47 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:20.844 13:37:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.844 13:37:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.844 ************************************ 00:07:20.844 START TEST accel_decomp_full 00:07:20.844 ************************************ 00:07:20.844 13:37:47 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.844 13:37:47 accel.accel_decomp_full -- 
accel/accel.sh@12 -- # build_accel_config 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:20.844 13:37:47 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:20.844 [2024-07-15 13:37:47.177403] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:20.844 [2024-07-15 13:37:47.177465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365158 ] 00:07:20.844 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.844 [2024-07-15 13:37:47.264025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.844 [2024-07-15 13:37:47.348964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@23 
-- # accel_opc=decompress 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.104 13:37:47 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.041 13:37:48 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.041 00:07:22.041 real 0m1.415s 00:07:22.041 user 0m1.269s 00:07:22.041 sys 0m0.161s 00:07:22.041 13:37:48 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.041 13:37:48 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:22.041 ************************************ 00:07:22.041 END TEST accel_decomp_full 00:07:22.041 ************************************ 00:07:22.300 13:37:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.300 13:37:48 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.300 13:37:48 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:22.300 13:37:48 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.300 13:37:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.300 ************************************ 00:07:22.300 START TEST accel_decomp_mcore 00:07:22.300 ************************************ 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:22.300 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:22.300 [2024-07-15 13:37:48.674194] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
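Editor's note: accel_decomp_mcore repeats the decompress case with "-m 0xf", taken here to be the core mask; it matches the EAL "-c 0xf" and the reactor messages just below, which show cores 0-3 coming up, and it is why this block's user time reported at the end is several times its wall time. Sketch, under the same assumptions as the previous notes:

# Sketch only: the multi-core variant of the decompress run above.
# -m 0xf selects cores 0-3; the remaining flags are unchanged.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf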
00:07:22.300 [2024-07-15 13:37:48.674267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365366 ] 00:07:22.300 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.300 [2024-07-15 13:37:48.762056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.559 [2024-07-15 13:37:48.846521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.559 [2024-07-15 13:37:48.846672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.559 [2024-07-15 13:37:48.846673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.559 [2024-07-15 13:37:48.846630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.559 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.560 13:37:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.937 00:07:23.937 real 0m1.413s 00:07:23.937 user 0m4.622s 00:07:23.937 sys 0m0.165s 00:07:23.937 13:37:50 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.937 13:37:50 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:23.937 ************************************ 00:07:23.937 END TEST accel_decomp_mcore 00:07:23.937 ************************************ 00:07:23.937 13:37:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.937 13:37:50 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.937 13:37:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:23.937 13:37:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.937 13:37:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.937 ************************************ 00:07:23.937 START TEST accel_decomp_full_mcore 00:07:23.937 ************************************ 00:07:23.937 13:37:50 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.937 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:23.937 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:23.937 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.937 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:23.938 [2024-07-15 13:37:50.166934] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
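accel_decomp_full_mcore repeats the multi-core run with -o 0 added to the logged command line; judging by the dumped configuration below, that switches the working set from the 4096-byte default to the full 111250-byte buffer. A standalone sketch under the same assumptions as the previous one (SPDK_DIR is shorthand, flags come straight from the log):

  # Full-buffer decompress on cores 0-3, as in the command recorded above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK_DIR"/build/examples/accel_perf \
      -t 1 -w decompress -y -o 0 \
      -l "$SPDK_DIR"/test/accel/bib \
      -m 0xf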
00:07:23.938 [2024-07-15 13:37:50.167005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365617 ] 00:07:23.938 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.938 [2024-07-15 13:37:50.254229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.938 [2024-07-15 13:37:50.341459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.938 [2024-07-15 13:37:50.341561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.938 [2024-07-15 13:37:50.341667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.938 [2024-07-15 13:37:50.341667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.938 13:37:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.318 00:07:25.318 real 0m1.414s 00:07:25.318 user 0m4.626s 00:07:25.318 sys 0m0.169s 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.318 13:37:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:25.318 ************************************ 00:07:25.318 END TEST accel_decomp_full_mcore 00:07:25.318 ************************************ 00:07:25.318 13:37:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.318 13:37:51 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.318 13:37:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:25.318 13:37:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.318 13:37:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.318 ************************************ 00:07:25.318 START TEST accel_decomp_mthread 00:07:25.318 ************************************ 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:25.319 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:25.319 [2024-07-15 13:37:51.665315] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
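accel_decomp_mthread drops the core mask and instead passes -T 2, so a single reactor (core 0, mask 0x1 in the EAL parameters below) runs the decompress workload; the dumped configuration shows the corresponding value 2 next to the 1-second duration. A standalone sketch, again using only flags from the logged command and SPDK_DIR as shorthand:

  # Multi-thread (-T 2) decompress of the bib input on a single core.
  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK_DIR"/build/examples/accel_perf \
      -t 1 -w decompress -y -T 2 \
      -l "$SPDK_DIR"/test/accel/bib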
00:07:25.319 [2024-07-15 13:37:51.665375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365874 ] 00:07:25.319 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.319 [2024-07-15 13:37:51.733359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.319 [2024-07-15 13:37:51.817178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.579 13:37:51 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.579 13:37:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.517 00:07:26.517 real 0m1.396s 00:07:26.517 user 0m1.264s 00:07:26.517 sys 0m0.149s 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.517 13:37:53 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:26.517 ************************************ 00:07:26.517 END TEST accel_decomp_mthread 00:07:26.517 ************************************ 00:07:26.777 13:37:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.777 13:37:53 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.777 13:37:53 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:26.777 13:37:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.777 13:37:53 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:26.777 ************************************ 00:07:26.777 START TEST accel_decomp_full_mthread 00:07:26.777 ************************************ 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:26.777 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:26.777 [2024-07-15 13:37:53.140422] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
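accel_decomp_full_mthread combines the two variations above: -o 0 for the full 111250-byte buffer and -T 2 on a single core (mask 0x1). The equivalent standalone sketch, under the same assumptions as before:

  # Full-buffer, multi-thread (-T 2) decompress on one core.
  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK_DIR"/build/examples/accel_perf \
      -t 1 -w decompress -y -o 0 -T 2 \
      -l "$SPDK_DIR"/test/accel/bib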
00:07:26.777 [2024-07-15 13:37:53.140481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366116 ] 00:07:26.777 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.777 [2024-07-15 13:37:53.226917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.037 [2024-07-15 13:37:53.310273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.037 13:37:53 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.037 13:37:53 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.417 00:07:28.417 real 0m1.427s 00:07:28.417 user 0m1.296s 00:07:28.417 sys 0m0.146s 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.417 13:37:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:28.417 ************************************ 00:07:28.417 END 
TEST accel_decomp_full_mthread 00:07:28.417 ************************************ 00:07:28.417 13:37:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.417 13:37:54 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:28.417 13:37:54 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.417 13:37:54 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:28.417 13:37:54 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.417 13:37:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.417 13:37:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.417 13:37:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.417 13:37:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.417 13:37:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.417 13:37:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.417 13:37:54 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.417 13:37:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:28.417 13:37:54 accel -- accel/accel.sh@41 -- # jq -r . 00:07:28.417 ************************************ 00:07:28.417 START TEST accel_dif_functional_tests 00:07:28.417 ************************************ 00:07:28.417 13:37:54 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.417 [2024-07-15 13:37:54.668744] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:28.417 [2024-07-15 13:37:54.668787] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366341 ] 00:07:28.417 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.417 [2024-07-15 13:37:54.733529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.417 [2024-07-15 13:37:54.818336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.417 [2024-07-15 13:37:54.818439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.417 [2024-07-15 13:37:54.818439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.417 00:07:28.417 00:07:28.417 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.417 http://cunit.sourceforge.net/ 00:07:28.417 00:07:28.417 00:07:28.417 Suite: accel_dif 00:07:28.417 Test: verify: DIF generated, GUARD check ...passed 00:07:28.417 Test: verify: DIF generated, APPTAG check ...passed 00:07:28.418 Test: verify: DIF generated, REFTAG check ...passed 00:07:28.418 Test: verify: DIF not generated, GUARD check ...[2024-07-15 13:37:54.897841] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:28.418 passed 00:07:28.418 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 13:37:54.897892] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:28.418 passed 00:07:28.418 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 13:37:54.897916] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:28.418 passed 00:07:28.418 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:28.418 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
13:37:54.897970] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:28.418 passed 00:07:28.418 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:28.418 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:28.418 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:28.418 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 13:37:54.898079] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:28.418 passed 00:07:28.418 Test: verify copy: DIF generated, GUARD check ...passed 00:07:28.418 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:28.418 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:28.418 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 13:37:54.898191] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:28.418 passed 00:07:28.418 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 13:37:54.898216] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:28.418 passed 00:07:28.418 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 13:37:54.898239] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:28.418 passed 00:07:28.418 Test: generate copy: DIF generated, GUARD check ...passed 00:07:28.418 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:28.418 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:28.418 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:28.418 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:28.418 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:28.418 Test: generate copy: iovecs-len validate ...[2024-07-15 13:37:54.898414] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:28.418 passed 00:07:28.418 Test: generate copy: buffer alignment validate ...passed 00:07:28.418 00:07:28.418 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.418 suites 1 1 n/a 0 0 00:07:28.418 tests 26 26 26 0 0 00:07:28.418 asserts 115 115 115 0 n/a 00:07:28.418 00:07:28.418 Elapsed time = 0.000 seconds 00:07:28.677 00:07:28.677 real 0m0.467s 00:07:28.677 user 0m0.667s 00:07:28.677 sys 0m0.164s 00:07:28.677 13:37:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.677 13:37:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:28.677 ************************************ 00:07:28.677 END TEST accel_dif_functional_tests 00:07:28.677 ************************************ 00:07:28.677 13:37:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.677 00:07:28.677 real 0m33.041s 00:07:28.677 user 0m35.681s 00:07:28.677 sys 0m5.560s 00:07:28.677 13:37:55 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.677 13:37:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.677 ************************************ 00:07:28.677 END TEST accel 00:07:28.677 ************************************ 00:07:28.677 13:37:55 -- common/autotest_common.sh@1142 -- # return 0 00:07:28.677 13:37:55 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:28.677 13:37:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.677 13:37:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.677 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:28.937 ************************************ 00:07:28.937 START TEST accel_rpc 00:07:28.937 ************************************ 00:07:28.937 13:37:55 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:28.937 * Looking for test storage... 00:07:28.937 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:28.937 13:37:55 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:28.937 13:37:55 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2366416 00:07:28.937 13:37:55 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2366416 00:07:28.937 13:37:55 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:28.937 13:37:55 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2366416 ']' 00:07:28.937 13:37:55 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.937 13:37:55 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.937 13:37:55 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.937 13:37:55 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.937 13:37:55 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.937 [2024-07-15 13:37:55.376971] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
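The accel_rpc suite that starts here works entirely over JSON-RPC against the spdk_tgt instance launched with --wait-for-rpc. A condensed sketch of the call sequence that the traces below walk through, using the same scripts/rpc.py seen later in the log (SPDK_DIR and RPC are only shorthand; the harness additionally waits on the RPC socket via waitforlisten and kills the target when the test ends):

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC="$SPDK_DIR"/scripts/rpc.py
  "$SPDK_DIR"/build/bin/spdk_tgt --wait-for-rpc &
  tgt_pid=$!
  sleep 1                                          # stand-in for the harness's waitforlisten
  "$RPC" accel_assign_opc -o copy -m incorrect     # accepted before init, even for a bogus module
  "$RPC" accel_assign_opc -o copy -m software      # reassign the copy opcode to the software module
  "$RPC" framework_start_init                      # finish subsystem initialization
  "$RPC" accel_get_opc_assignments | jq -r .copy   # prints "software" on success
  kill "$tgt_pid"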
00:07:28.937 [2024-07-15 13:37:55.377034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366416 ] 00:07:28.937 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.937 [2024-07-15 13:37:55.460963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.196 [2024-07-15 13:37:55.541119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.765 13:37:56 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.765 13:37:56 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:29.765 13:37:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:29.765 13:37:56 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:29.765 13:37:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:29.765 13:37:56 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:29.765 13:37:56 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:29.765 13:37:56 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.765 13:37:56 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.765 13:37:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.765 ************************************ 00:07:29.765 START TEST accel_assign_opcode 00:07:29.765 ************************************ 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:29.765 [2024-07-15 13:37:56.235204] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:29.765 [2024-07-15 13:37:56.247228] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.765 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.024 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.024 13:37:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:30.024 13:37:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:30.024 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:30.024 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.024 13:37:56 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:30.024 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.024 software 00:07:30.024 00:07:30.025 real 0m0.260s 00:07:30.025 user 0m0.047s 00:07:30.025 sys 0m0.015s 00:07:30.025 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.025 13:37:56 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.025 ************************************ 00:07:30.025 END TEST accel_assign_opcode 00:07:30.025 ************************************ 00:07:30.025 13:37:56 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:30.025 13:37:56 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2366416 00:07:30.025 13:37:56 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2366416 ']' 00:07:30.025 13:37:56 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2366416 00:07:30.025 13:37:56 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:30.025 13:37:56 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.025 13:37:56 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2366416 00:07:30.284 13:37:56 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.284 13:37:56 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.284 13:37:56 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2366416' 00:07:30.284 killing process with pid 2366416 00:07:30.284 13:37:56 accel_rpc -- common/autotest_common.sh@967 -- # kill 2366416 00:07:30.284 13:37:56 accel_rpc -- common/autotest_common.sh@972 -- # wait 2366416 00:07:30.543 00:07:30.543 real 0m1.722s 00:07:30.543 user 0m1.738s 00:07:30.543 sys 0m0.530s 00:07:30.543 13:37:56 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.543 13:37:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.543 ************************************ 00:07:30.543 END TEST accel_rpc 00:07:30.543 ************************************ 00:07:30.543 13:37:56 -- common/autotest_common.sh@1142 -- # return 0 00:07:30.543 13:37:56 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:30.543 13:37:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.543 13:37:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.543 13:37:56 -- common/autotest_common.sh@10 -- # set +x 00:07:30.543 ************************************ 00:07:30.543 START TEST app_cmdline 00:07:30.544 ************************************ 00:07:30.544 13:37:57 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:30.803 * Looking for test storage... 
00:07:30.803 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:30.803 13:37:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:30.803 13:37:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2366735 00:07:30.803 13:37:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2366735 00:07:30.803 13:37:57 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:30.803 13:37:57 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2366735 ']' 00:07:30.803 13:37:57 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.803 13:37:57 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.803 13:37:57 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.803 13:37:57 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.803 13:37:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:30.803 [2024-07-15 13:37:57.189419] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:07:30.803 [2024-07-15 13:37:57.189496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366735 ] 00:07:30.803 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.803 [2024-07-15 13:37:57.276391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.062 [2024-07-15 13:37:57.367652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.631 13:37:57 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.631 13:37:57 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:31.631 13:37:57 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:31.631 { 00:07:31.631 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:07:31.631 "fields": { 00:07:31.631 "major": 24, 00:07:31.631 "minor": 9, 00:07:31.631 "patch": 0, 00:07:31.631 "suffix": "-pre", 00:07:31.631 "commit": "2728651ee" 00:07:31.631 } 00:07:31.631 } 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.891 request: 00:07:31.891 { 00:07:31.891 "method": "env_dpdk_get_mem_stats", 00:07:31.891 "req_id": 1 00:07:31.891 } 00:07:31.891 Got JSON-RPC error response 00:07:31.891 response: 00:07:31.891 { 00:07:31.891 "code": -32601, 00:07:31.891 "message": "Method not found" 00:07:31.891 } 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.891 13:37:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2366735 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2366735 ']' 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2366735 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.891 13:37:58 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2366735 00:07:32.150 13:37:58 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.150 13:37:58 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.150 13:37:58 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2366735' 00:07:32.150 killing process with pid 2366735 00:07:32.150 13:37:58 app_cmdline -- common/autotest_common.sh@967 -- # kill 2366735 00:07:32.150 13:37:58 app_cmdline -- common/autotest_common.sh@972 -- # wait 2366735 00:07:32.410 00:07:32.410 real 0m1.776s 00:07:32.410 user 0m2.031s 00:07:32.410 sys 0m0.539s 00:07:32.410 13:37:58 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.410 13:37:58 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.410 ************************************ 00:07:32.410 END TEST app_cmdline 00:07:32.410 ************************************ 00:07:32.410 13:37:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:32.410 13:37:58 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:32.410 13:37:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.410 13:37:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.410 13:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:32.410 ************************************ 00:07:32.410 START TEST version 00:07:32.410 ************************************ 00:07:32.410 13:37:58 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:32.670 * Looking for test storage... 00:07:32.670 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:32.670 13:37:58 version -- app/version.sh@17 -- # get_header_version major 00:07:32.670 13:37:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:32.670 13:37:58 version -- app/version.sh@14 -- # cut -f2 00:07:32.670 13:37:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.670 13:37:58 version -- app/version.sh@17 -- # major=24 00:07:32.670 13:37:58 version -- app/version.sh@18 -- # get_header_version minor 00:07:32.670 13:37:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:32.670 13:37:58 version -- app/version.sh@14 -- # cut -f2 00:07:32.670 13:37:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.670 13:37:58 version -- app/version.sh@18 -- # minor=9 00:07:32.670 13:37:59 version -- app/version.sh@19 -- # get_header_version patch 00:07:32.670 13:37:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:32.670 13:37:59 version -- app/version.sh@14 -- # cut -f2 00:07:32.670 13:37:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.670 13:37:59 version -- app/version.sh@19 -- # patch=0 00:07:32.670 13:37:59 version -- app/version.sh@20 -- # get_header_version suffix 00:07:32.670 13:37:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:32.670 13:37:59 version -- app/version.sh@14 -- # cut -f2 00:07:32.670 13:37:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.670 13:37:59 version -- app/version.sh@20 -- # suffix=-pre 00:07:32.670 13:37:59 version -- app/version.sh@22 -- # version=24.9 00:07:32.670 13:37:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:32.670 13:37:59 version -- app/version.sh@28 -- # version=24.9rc0 00:07:32.670 13:37:59 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:32.670 13:37:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:32.670 13:37:59 version -- app/version.sh@30 -- # py_version=24.9rc0 
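The version test above derives every component of the version string directly from include/spdk/version.h with grep, cut and tr, then checks the result against what the Python package reports. A small sketch of the same extraction, assuming it is run from the root of an SPDK checkout (the helper function and path are illustrative, not part of the test):

# Sketch of the version extraction performed by test/app/version.sh above.
hdr=include/spdk/version.h            # path relative to an SPDK checkout (assumption)
get() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }

major=$(get MAJOR); minor=$(get MINOR); patch=$(get PATCH); suffix=$(get SUFFIX)

version=$major.$minor
(( patch != 0 )) && version=$version.$patch       # patch is 0 in this run, so it is dropped
[[ $suffix == -pre ]] && version=${version}rc0    # -pre maps to rc0, giving 24.9rc0 here
echo "$version"

# Cross-check against the Python bindings, as the test does:
# PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)'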
00:07:32.670 13:37:59 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:32.670 00:07:32.670 real 0m0.183s 00:07:32.670 user 0m0.087s 00:07:32.670 sys 0m0.145s 00:07:32.670 13:37:59 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.670 13:37:59 version -- common/autotest_common.sh@10 -- # set +x 00:07:32.670 ************************************ 00:07:32.670 END TEST version 00:07:32.670 ************************************ 00:07:32.670 13:37:59 -- common/autotest_common.sh@1142 -- # return 0 00:07:32.670 13:37:59 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:32.670 13:37:59 -- spdk/autotest.sh@198 -- # uname -s 00:07:32.670 13:37:59 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:32.670 13:37:59 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:32.670 13:37:59 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:32.670 13:37:59 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:32.670 13:37:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:32.670 13:37:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:32.670 13:37:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.670 13:37:59 -- common/autotest_common.sh@10 -- # set +x 00:07:32.670 13:37:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:32.670 13:37:59 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:32.670 13:37:59 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:32.670 13:37:59 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:32.670 13:37:59 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:07:32.670 13:37:59 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:32.670 13:37:59 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:32.670 13:37:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.670 13:37:59 -- common/autotest_common.sh@10 -- # set +x 00:07:32.931 ************************************ 00:07:32.931 START TEST nvmf_rdma 00:07:32.931 ************************************ 00:07:32.931 13:37:59 nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:32.931 * Looking for test storage... 00:07:32.931 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.931 13:37:59 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:32.932 13:37:59 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.932 13:37:59 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.932 13:37:59 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.932 13:37:59 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.932 13:37:59 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.932 13:37:59 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.932 13:37:59 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:07:32.932 13:37:59 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:32.932 13:37:59 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.932 13:37:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:32.932 13:37:59 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:32.932 13:37:59 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:32.932 13:37:59 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.932 13:37:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:32.932 ************************************ 00:07:32.932 START TEST nvmf_example 00:07:32.932 ************************************ 00:07:32.932 13:37:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:33.275 * Looking for test storage... 
00:07:33.275 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.275 13:37:59 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:33.276 13:37:59 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.276 13:37:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:39.845 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:39.845 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.845 13:38:06 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:39.845 Found net devices under 0000:18:00.0: mlx_0_0 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.845 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:39.846 Found net devices under 0000:18:00.1: mlx_0_1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:39.846 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:39.846 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:07:39.846 altname enp24s0f0np0 00:07:39.846 altname ens785f0np0 00:07:39.846 inet 192.168.100.8/24 scope global mlx_0_0 00:07:39.846 valid_lft forever preferred_lft forever 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:39.846 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:39.846 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:07:39.846 altname enp24s0f1np1 00:07:39.846 altname ens785f1np1 00:07:39.846 inet 192.168.100.9/24 scope global mlx_0_1 00:07:39.846 valid_lft forever preferred_lft forever 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:39.846 13:38:06 
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:39.846 192.168.100.9' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:39.846 192.168.100.9' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:39.846 
13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:39.846 192.168.100.9' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2370054 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2370054 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2370054 ']' 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
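By this point the nvmftestinit block above has classified the two mlx5 ports, loaded the IB/RDMA kernel modules, and read each interface's IPv4 address to build RDMA_IP_LIST (192.168.100.8 and 192.168.100.9 on this host). The per-interface lookup is a single pipeline; a sketch, assuming the interface names from this run:

# Sketch of the address lookup used by get_ip_address above.
# mlx_0_0 / mlx_0_1 are the names detected on this host and may differ elsewhere.
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# Prints 192.168.100.8 and 192.168.100.9 here; the first address becomes
# NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP.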
00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.846 13:38:06 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.106 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:41.043 13:38:07 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
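Stripped of the xtrace plumbing, the example test has so far started the nvmf example application, created an RDMA transport, backed a subsystem with a malloc bdev, exposed it on the discovered RDMA address, and launched spdk_nvme_perf against it. A condensed sketch of that sequence, assuming an SPDK build tree at ./spdk and reusing the 192.168.100.8 address from this run:

# Condensed sketch of the target bring-up driven above through rpc_cmd.
SPDK_DIR=./spdk                                        # assumption; adjust to your checkout
RPC=$SPDK_DIR/scripts/rpc.py

$SPDK_DIR/build/examples/nvmf -i 0 -g 10000 -m 0xF &   # example target on cores 0-3
sleep 2                                                # stand-in for the test's waitforlisten

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512                         # 64 MiB, 512 B blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# Drive I/O over the fabric exactly as the run above does:
$SPDK_DIR/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'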
00:07:41.303 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.520 Initializing NVMe Controllers 00:07:53.520 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:53.520 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:53.520 Initialization complete. Launching workers. 00:07:53.520 ======================================================== 00:07:53.520 Latency(us) 00:07:53.520 Device Information : IOPS MiB/s Average min max 00:07:53.520 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 24887.30 97.22 2572.88 637.45 13080.17 00:07:53.520 ======================================================== 00:07:53.520 Total : 24887.30 97.22 2572.88 637.45 13080.17 00:07:53.520 00:07:53.520 13:38:18 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:53.520 13:38:18 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:53.520 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:53.520 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:53.520 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:53.520 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:53.521 rmmod nvme_rdma 00:07:53.521 rmmod nvme_fabrics 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2370054 ']' 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2370054 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2370054 ']' 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2370054 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2370054 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2370054' 00:07:53.521 killing process with pid 2370054 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- common/autotest_common.sh@967 -- # kill 2370054 00:07:53.521 13:38:18 nvmf_rdma.nvmf_example -- common/autotest_common.sh@972 -- # wait 2370054 00:07:53.521 nvmf threads initialize successfully 00:07:53.521 bdev subsystem init successfully 00:07:53.521 created a nvmf target service 00:07:53.521 create targets's poll groups done 00:07:53.521 all subsystems of target started 00:07:53.521 nvmf target is running 00:07:53.521 all subsystems of target stopped 00:07:53.521 destroy targets's poll groups done 
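A quick sanity check on the perf summary above: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size, 24887.30 × 4096 B ≈ 97.22 MiB/s, matching the reported figure; the Average/min/max columns are per-I/O latencies in microseconds, and 64 outstanding I/Os divided by the ~2573 us average latency again lands near 24.9K IOPS.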
00:07:53.521 destroyed the nvmf target service 00:07:53.521 bdev subsystem finish successfully 00:07:53.521 nvmf threads destroy successfully 00:07:53.521 13:38:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.521 13:38:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:53.521 13:38:19 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:53.521 13:38:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.521 13:38:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.521 00:07:53.521 real 0m19.834s 00:07:53.521 user 0m52.449s 00:07:53.521 sys 0m5.696s 00:07:53.521 13:38:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.521 13:38:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:53.521 ************************************ 00:07:53.521 END TEST nvmf_example 00:07:53.521 ************************************ 00:07:53.521 13:38:19 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:07:53.521 13:38:19 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:53.521 13:38:19 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.521 13:38:19 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.521 13:38:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:53.521 ************************************ 00:07:53.521 START TEST nvmf_filesystem 00:07:53.521 ************************************ 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:53.521 * Looking for test storage... 
00:07:53.521 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:53.521 13:38:19 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- 
# CONFIG_HAVE_LIBBSD=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:53.521 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:53.522 
13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:53.522 #define SPDK_CONFIG_H 00:07:53.522 #define SPDK_CONFIG_APPS 1 00:07:53.522 #define SPDK_CONFIG_ARCH native 00:07:53.522 #undef SPDK_CONFIG_ASAN 00:07:53.522 #undef SPDK_CONFIG_AVAHI 00:07:53.522 #undef SPDK_CONFIG_CET 00:07:53.522 #define SPDK_CONFIG_COVERAGE 1 00:07:53.522 #define SPDK_CONFIG_CROSS_PREFIX 00:07:53.522 #undef SPDK_CONFIG_CRYPTO 00:07:53.522 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:53.522 #undef SPDK_CONFIG_CUSTOMOCF 00:07:53.522 #undef SPDK_CONFIG_DAOS 00:07:53.522 #define SPDK_CONFIG_DAOS_DIR 00:07:53.522 #define SPDK_CONFIG_DEBUG 1 00:07:53.522 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:53.522 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:53.522 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:53.522 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:53.522 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:53.522 #undef SPDK_CONFIG_DPDK_UADK 00:07:53.522 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:53.522 #define SPDK_CONFIG_EXAMPLES 1 00:07:53.522 #undef SPDK_CONFIG_FC 00:07:53.522 #define SPDK_CONFIG_FC_PATH 00:07:53.522 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:53.522 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:53.522 #undef SPDK_CONFIG_FUSE 00:07:53.522 #undef SPDK_CONFIG_FUZZER 00:07:53.522 #define SPDK_CONFIG_FUZZER_LIB 00:07:53.522 #undef SPDK_CONFIG_GOLANG 00:07:53.522 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:53.522 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:53.522 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:53.522 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:53.522 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:53.522 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:53.522 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:53.522 #define SPDK_CONFIG_IDXD 1 00:07:53.522 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:53.522 #undef SPDK_CONFIG_IPSEC_MB 00:07:53.522 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:53.522 #define SPDK_CONFIG_ISAL 1 00:07:53.522 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:53.522 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:53.522 #define SPDK_CONFIG_LIBDIR 00:07:53.522 #undef SPDK_CONFIG_LTO 00:07:53.522 #define SPDK_CONFIG_MAX_LCORES 128 00:07:53.522 #define SPDK_CONFIG_NVME_CUSE 1 00:07:53.522 #undef SPDK_CONFIG_OCF 00:07:53.522 #define 
SPDK_CONFIG_OCF_PATH 00:07:53.522 #define SPDK_CONFIG_OPENSSL_PATH 00:07:53.522 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:53.522 #define SPDK_CONFIG_PGO_DIR 00:07:53.522 #undef SPDK_CONFIG_PGO_USE 00:07:53.522 #define SPDK_CONFIG_PREFIX /usr/local 00:07:53.522 #undef SPDK_CONFIG_RAID5F 00:07:53.522 #undef SPDK_CONFIG_RBD 00:07:53.522 #define SPDK_CONFIG_RDMA 1 00:07:53.522 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:53.522 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:53.522 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:53.522 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:53.522 #define SPDK_CONFIG_SHARED 1 00:07:53.522 #undef SPDK_CONFIG_SMA 00:07:53.522 #define SPDK_CONFIG_TESTS 1 00:07:53.522 #undef SPDK_CONFIG_TSAN 00:07:53.522 #define SPDK_CONFIG_UBLK 1 00:07:53.522 #define SPDK_CONFIG_UBSAN 1 00:07:53.522 #undef SPDK_CONFIG_UNIT_TESTS 00:07:53.522 #undef SPDK_CONFIG_URING 00:07:53.522 #define SPDK_CONFIG_URING_PATH 00:07:53.522 #undef SPDK_CONFIG_URING_ZNS 00:07:53.522 #undef SPDK_CONFIG_USDT 00:07:53.522 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:53.522 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:53.522 #undef SPDK_CONFIG_VFIO_USER 00:07:53.522 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:53.522 #define SPDK_CONFIG_VHOST 1 00:07:53.522 #define SPDK_CONFIG_VIRTIO 1 00:07:53.522 #undef SPDK_CONFIG_VTUNE 00:07:53.522 #define SPDK_CONFIG_VTUNE_DIR 00:07:53.522 #define SPDK_CONFIG_WERROR 1 00:07:53.522 #define SPDK_CONFIG_WPDK_DIR 00:07:53.522 #undef SPDK_CONFIG_XNVME 00:07:53.522 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:07:53.522 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:53.523 13:38:19 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@158 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:53.523 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j72 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2371834 ]] 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2371834 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.zrMIDN 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.zrMIDN/tests/target /tmp/spdk.zrMIDN 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=951971840 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4332457984 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=84820606976 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=94508572672 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9687965696 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47249575936 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:53.524 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=18892550144 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901716992 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9166848 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47253794816 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47254286336 
00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=491520 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9450852352 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450856448 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:53.525 * Looking for test storage... 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=84820606976 00:07:53.525 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=11902558208 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:53.526 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 
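The set_test_storage trace above works as follows: df -T lists candidate mounts, the one backing the test directory (here "/", the spdk_root overlay) is selected, its available space becomes target_space, and new_size is formed by adding the 2214592512-byte request to the space already used on that mount; the directory is exported as SPDK_TEST_STORAGE only while new_size stays at or below 95% of the filesystem size. Re-running that arithmetic with the values logged above (an illustration of the check, not the harness code itself):

    # Values copied from the trace above; this just re-plays the acceptance check.
    requested_size=2214592512      # requested test storage
    target_space=84820606976       # avail on the overlay mounted at /
    used=9687965696                # space already in use on that mount
    size=94508572672               # total size of that mount

    if (( target_space >= requested_size )); then
        new_size=$(( used + requested_size ))          # 11902558208, matching the trace
        if (( new_size * 100 / size <= 95 )); then     # ~12.6% of the filesystem
            echo "accepting /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target"
        fi
    fi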
00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:53.526 13:38:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.100 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.100 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:00.100 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:00.100 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:00.100 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:00.100 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:00.100 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:00.100 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
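Up to this point the trace is nvmf/common.sh preparing the test environment: listener ports 4420-4422 on the 192.168.100.0/24 prefix, a host NQN generated with nvme gen-hostnqn (its UUID doubles as the host ID), the nvmf_tgt argument array, and the start of the PCI scan that whitelists supported NICs by vendor/device ID. A minimal standalone sketch of that setup, with the host-ID derivation assumed rather than copied from the harness:

    # Illustrative re-creation of the environment the trace sets up (not the harness code).
    NVMF_PORT=4420
    NVMF_IP_PREFIX=192.168.100
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumption: reuse the UUID part as the host ID
    NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF)   # shm id 0, all tracepoint groups
    echo "host NQN $NVME_HOSTNQN, target prefix $NVMF_IP_PREFIX.0/24, port $NVMF_PORT"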
00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:00.101 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:00.101 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:00.101 Found net devices under 0000:18:00.0: mlx_0_0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:00.101 Found net devices under 0000:18:00.1: mlx_0_1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem 
-- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:00.101 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:00.101 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:00.101 altname enp24s0f0np0 00:08:00.101 altname ens785f0np0 00:08:00.101 inet 192.168.100.8/24 scope global mlx_0_0 00:08:00.101 valid_lft forever preferred_lft forever 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show 
mlx_0_1 00:08:00.101 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:00.101 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:00.101 altname enp24s0f1np1 00:08:00.101 altname ens785f1np1 00:08:00.101 inet 192.168.100.9/24 scope global mlx_0_1 00:08:00.101 valid_lft forever preferred_lft forever 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 
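The address lookup repeated above for mlx_0_0 and mlx_0_1 is an ip/awk/cut pipeline; it resolves mlx_0_0 to 192.168.100.8 and mlx_0_1 to 192.168.100.9. A small helper that does the same thing (the function name here is illustrative; the harness calls its own get_ip_address):

    # Return the first IPv4 address of an interface, without the prefix length.
    ipv4_of() {
        local ifname=$1
        # 'ip -o' prints one line per address; field 4 is ADDR/PREFIX, cut drops /PREFIX
        ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
    }
    ipv4_of mlx_0_0   # -> 192.168.100.8
    ipv4_of mlx_0_1   # -> 192.168.100.9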
00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:00.101 192.168.100.9' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:00.101 192.168.100.9' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:00.101 192.168.100.9' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.101 ************************************ 00:08:00.101 START TEST nvmf_filesystem_no_in_capsule 00:08:00.101 ************************************ 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2374790 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2374790 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@829 -- # '[' -z 2374790 ']' 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.101 13:38:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.101 [2024-07-15 13:38:26.577795] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:00.101 [2024-07-15 13:38:26.577844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.101 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.360 [2024-07-15 13:38:26.664132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.360 [2024-07-15 13:38:26.749085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.360 [2024-07-15 13:38:26.749131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.360 [2024-07-15 13:38:26.749140] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.360 [2024-07-15 13:38:26.749148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.360 [2024-07-15 13:38:26.749155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
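Here nvmfappstart has launched the target with -m 0xF (cores 0-3) and waitforlisten is polling until the RPC socket /var/tmp/spdk.sock answers. A rough, simplified equivalent of that launch-and-wait step, assuming an SPDK build tree as the working directory and using rpc_get_methods purely as a liveness probe (the harness's waitforlisten differs in detail):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket instead of sleeping a fixed time; the retry budget is arbitrary.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done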
00:08:00.360 [2024-07-15 13:38:26.749269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.360 [2024-07-15 13:38:26.749387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.360 [2024-07-15 13:38:26.749489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.360 [2024-07-15 13:38:26.749491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.926 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.926 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:00.926 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.926 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.926 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.926 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.926 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:00.926 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:00.926 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.926 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.926 [2024-07-15 13:38:27.448531] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:01.184 [2024-07-15 13:38:27.469294] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x199e180/0x19a2670) succeed. 00:08:01.184 [2024-07-15 13:38:27.478706] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x199f7c0/0x19e3d00) succeed. 
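The create_ib_device notices above are the target reacting to the nvmf_create_transport call issued through rpc_cmd. Outside the wrapper, that step corresponds roughly to a single rpc.py invocation against the default socket (arguments copied from the trace):

    # RDMA transport, 8 KiB IO unit, no in-capsule data for this first test pass.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0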
00:08:01.184 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.184 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:01.184 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.184 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.184 Malloc1 00:08:01.184 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.184 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:01.184 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.184 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.443 [2024-07-15 13:38:27.735137] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.443 13:38:27 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:01.443 { 00:08:01.443 "name": "Malloc1", 00:08:01.443 "aliases": [ 00:08:01.443 "094f0bad-e09d-4811-8eff-1a3d7f6300db" 00:08:01.443 ], 00:08:01.443 "product_name": "Malloc disk", 00:08:01.443 "block_size": 512, 00:08:01.443 "num_blocks": 1048576, 00:08:01.443 "uuid": "094f0bad-e09d-4811-8eff-1a3d7f6300db", 00:08:01.443 "assigned_rate_limits": { 00:08:01.443 "rw_ios_per_sec": 0, 00:08:01.443 "rw_mbytes_per_sec": 0, 00:08:01.443 "r_mbytes_per_sec": 0, 00:08:01.443 "w_mbytes_per_sec": 0 00:08:01.443 }, 00:08:01.443 "claimed": true, 00:08:01.443 "claim_type": "exclusive_write", 00:08:01.443 "zoned": false, 00:08:01.443 "supported_io_types": { 00:08:01.443 "read": true, 00:08:01.443 "write": true, 00:08:01.443 "unmap": true, 00:08:01.443 "flush": true, 00:08:01.443 "reset": true, 00:08:01.443 "nvme_admin": false, 00:08:01.443 "nvme_io": false, 00:08:01.443 "nvme_io_md": false, 00:08:01.443 "write_zeroes": true, 00:08:01.443 "zcopy": true, 00:08:01.443 "get_zone_info": false, 00:08:01.443 "zone_management": false, 00:08:01.443 "zone_append": false, 00:08:01.443 "compare": false, 00:08:01.443 "compare_and_write": false, 00:08:01.443 "abort": true, 00:08:01.443 "seek_hole": false, 00:08:01.443 "seek_data": false, 00:08:01.443 "copy": true, 00:08:01.443 "nvme_iov_md": false 00:08:01.443 }, 00:08:01.443 "memory_domains": [ 00:08:01.443 { 00:08:01.443 "dma_device_id": "system", 00:08:01.443 "dma_device_type": 1 00:08:01.443 }, 00:08:01.443 { 00:08:01.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.443 "dma_device_type": 2 00:08:01.443 } 00:08:01.443 ], 00:08:01.443 "driver_specific": {} 00:08:01.443 } 00:08:01.443 ]' 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:01.443 13:38:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:02.379 13:38:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.379 13:38:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:02.379 13:38:28 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.379 13:38:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:02.379 13:38:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:04.913 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:04.913 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:04.913 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:04.913 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:04.913 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:04.913 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:04.914 13:38:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:04.914 13:38:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.851 13:38:32 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.851 ************************************ 00:08:05.851 START TEST filesystem_ext4 00:08:05.851 ************************************ 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:05.851 mke2fs 1.46.5 (30-Dec-2021) 00:08:05.851 Discarding device blocks: 0/522240 done 00:08:05.851 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:05.851 Filesystem UUID: e242fd4e-0b33-4b0c-b5bf-b2fb47dc4102 00:08:05.851 Superblock backups stored on blocks: 00:08:05.851 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:05.851 00:08:05.851 Allocating group tables: 0/64 done 00:08:05.851 Writing inode tables: 0/64 done 00:08:05.851 Creating journal (8192 blocks): done 00:08:05.851 Writing superblocks and filesystem accounting information: 0/64 done 00:08:05.851 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:05.851 13:38:32 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2374790 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:05.851 00:08:05.851 real 0m0.202s 00:08:05.851 user 0m0.024s 00:08:05.851 sys 0m0.079s 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:05.851 ************************************ 00:08:05.851 END TEST filesystem_ext4 00:08:05.851 ************************************ 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.851 ************************************ 00:08:05.851 START TEST filesystem_btrfs 00:08:05.851 ************************************ 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:05.851 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:06.110 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:06.110 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:06.110 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:06.110 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = 
ext4 ']' 00:08:06.110 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:06.110 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:06.110 btrfs-progs v6.6.2 00:08:06.110 See https://btrfs.readthedocs.io for more information. 00:08:06.110 00:08:06.110 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:06.110 NOTE: several default settings have changed in version 5.15, please make sure 00:08:06.110 this does not affect your deployments: 00:08:06.110 - DUP for metadata (-m dup) 00:08:06.110 - enabled no-holes (-O no-holes) 00:08:06.110 - enabled free-space-tree (-R free-space-tree) 00:08:06.110 00:08:06.110 Label: (null) 00:08:06.110 UUID: 02736113-7bfa-498f-8825-96a492123b17 00:08:06.110 Node size: 16384 00:08:06.110 Sector size: 4096 00:08:06.110 Filesystem size: 510.00MiB 00:08:06.110 Block group profiles: 00:08:06.110 Data: single 8.00MiB 00:08:06.110 Metadata: DUP 32.00MiB 00:08:06.110 System: DUP 8.00MiB 00:08:06.110 SSD detected: yes 00:08:06.110 Zoned device: no 00:08:06.111 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:06.111 Runtime features: free-space-tree 00:08:06.111 Checksum: crc32c 00:08:06.111 Number of devices: 1 00:08:06.111 Devices: 00:08:06.111 ID SIZE PATH 00:08:06.111 1 510.00MiB /dev/nvme0n1p1 00:08:06.111 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2374790 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.111 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.370 00:08:06.370 real 0m0.273s 00:08:06.370 user 0m0.027s 00:08:06.370 sys 0m0.139s 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:06.370 ************************************ 00:08:06.370 END TEST filesystem_btrfs 00:08:06.370 ************************************ 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:06.370 ************************************ 00:08:06.370 START TEST filesystem_xfs 00:08:06.370 ************************************ 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:06.370 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:06.370 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:06.370 = sectsz=512 attr=2, projid32bit=1 00:08:06.370 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:06.370 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:06.371 data = bsize=4096 blocks=130560, imaxpct=25 00:08:06.371 = sunit=0 swidth=0 blks 00:08:06.371 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:06.371 log =internal log bsize=4096 blocks=16384, version=2 00:08:06.371 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:06.371 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:06.371 Discarding blocks...Done. 
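The ext4, btrfs and xfs sub-tests all run the same pattern from target/filesystem.sh: make a filesystem on the first partition of the exported namespace, mount it, create and delete a file, then unmount and confirm the target process is still alive. A condensed sketch of that pattern (device and mount point taken from the trace; the kill -0 liveness check and retry logic are omitted):

    fs_smoke_test() {
        local fstype=$1 dev=/dev/nvme0n1p1 mnt=/mnt/device
        mkdir -p "$mnt"
        case $fstype in
            ext4) mkfs.ext4 -F "$dev" ;;        # ext4 uses -F to force
            *)    mkfs.$fstype -f "$dev" ;;     # btrfs and xfs use -f
        esac
        mount "$dev" "$mnt"
        touch "$mnt/aaa" && sync
        rm "$mnt/aaa" && sync
        umount "$mnt"
    }
    fs_smoke_test ext4    # likewise: fs_smoke_test btrfs, fs_smoke_test xfs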
00:08:06.371 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:06.371 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.371 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.371 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:06.371 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.371 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2374790 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.630 00:08:06.630 real 0m0.209s 00:08:06.630 user 0m0.028s 00:08:06.630 sys 0m0.075s 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:06.630 ************************************ 00:08:06.630 END TEST filesystem_xfs 00:08:06.630 ************************************ 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:06.630 13:38:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:06.630 13:38:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:06.630 13:38:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:07.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.604 13:38:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:07.604 13:38:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:07.604 13:38:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.604 13:38:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 
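Teardown on the host side is the partition removal and nvme disconnect seen above, followed by waitforserial_disconnect, which waits until no block device with the subsystem serial remains. A simplified version of that disconnect-and-verify step:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # Wait for the namespace to disappear; the harness bounds this loop, this sketch does not.
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1
    done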
00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2374790 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2374790 ']' 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2374790 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2374790 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2374790' 00:08:07.604 killing process with pid 2374790 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2374790 00:08:07.604 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2374790 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:08.173 00:08:08.173 real 0m8.001s 00:08:08.173 user 0m31.015s 00:08:08.173 sys 0m1.280s 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.173 ************************************ 00:08:08.173 END TEST nvmf_filesystem_no_in_capsule 00:08:08.173 ************************************ 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:08.173 13:38:34 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.173 ************************************ 00:08:08.173 START TEST nvmf_filesystem_in_capsule 00:08:08.173 ************************************ 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2375973 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2375973 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2375973 ']' 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.173 13:38:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.173 [2024-07-15 13:38:34.671750] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:08.173 [2024-07-15 13:38:34.671804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.431 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.431 [2024-07-15 13:38:34.762741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.431 [2024-07-15 13:38:34.854679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.431 [2024-07-15 13:38:34.854727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
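[Editor's sketch] For the in-capsule variant the script starts a fresh nvmf_tgt (pid 2375973 above) and blocks until its RPC socket answers. A rough standalone equivalent of that bring-up, assuming the workspace layout used by this job, would be:

```bash
# Rough equivalent of nvmfappstart/waitforlisten as traced above. The SPDK path
# and socket are the ones used by this job; the readiness poll is a simplified
# stand-in for what autotest_common.sh actually does, not a verbatim copy.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup"; exit 1; }
    sleep 0.5
done
```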
00:08:08.431 [2024-07-15 13:38:34.854736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.431 [2024-07-15 13:38:34.854761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.431 [2024-07-15 13:38:34.854768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.431 [2024-07-15 13:38:34.854838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.431 [2024-07-15 13:38:34.854947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.431 [2024-07-15 13:38:34.855052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.431 [2024-07-15 13:38:34.855053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.997 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.997 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:08.997 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.997 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.997 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.255 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.255 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:09.255 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:09.255 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.255 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.255 [2024-07-15 13:38:35.565640] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x179f180/0x17a3670) succeed. 00:08:09.255 [2024-07-15 13:38:35.575159] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17a07c0/0x17e4d00) succeed. 
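[Editor's sketch] The rpc_cmd calls traced here and immediately below are the standard NVMe-oF RDMA target setup for this test. Collected in one place, with the same arguments this run passed through to rpc.py, the sequence is:

```bash
# RPC sequence behind filesystem.sh@52-@56 as traced in this run
# (rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock).
rpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

# RDMA transport with 4 KiB of in-capsule data -- the point of the in_capsule variant
rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096

# 512 MiB malloc bdev exported as a namespace of cnode1, listening on 192.168.100.8:4420
rpc bdev_malloc_create 512 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
```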
00:08:09.255 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.255 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:09.255 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.255 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.513 Malloc1 00:08:09.513 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.513 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.514 [2024-07-15 13:38:35.868276] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.514 
13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:09.514 { 00:08:09.514 "name": "Malloc1", 00:08:09.514 "aliases": [ 00:08:09.514 "f3c34502-ac6d-40f3-a57f-c19eef9d3789" 00:08:09.514 ], 00:08:09.514 "product_name": "Malloc disk", 00:08:09.514 "block_size": 512, 00:08:09.514 "num_blocks": 1048576, 00:08:09.514 "uuid": "f3c34502-ac6d-40f3-a57f-c19eef9d3789", 00:08:09.514 "assigned_rate_limits": { 00:08:09.514 "rw_ios_per_sec": 0, 00:08:09.514 "rw_mbytes_per_sec": 0, 00:08:09.514 "r_mbytes_per_sec": 0, 00:08:09.514 "w_mbytes_per_sec": 0 00:08:09.514 }, 00:08:09.514 "claimed": true, 00:08:09.514 "claim_type": "exclusive_write", 00:08:09.514 "zoned": false, 00:08:09.514 "supported_io_types": { 00:08:09.514 "read": true, 00:08:09.514 "write": true, 00:08:09.514 "unmap": true, 00:08:09.514 "flush": true, 00:08:09.514 "reset": true, 00:08:09.514 "nvme_admin": false, 00:08:09.514 "nvme_io": false, 00:08:09.514 "nvme_io_md": false, 00:08:09.514 "write_zeroes": true, 00:08:09.514 "zcopy": true, 00:08:09.514 "get_zone_info": false, 00:08:09.514 "zone_management": false, 00:08:09.514 "zone_append": false, 00:08:09.514 "compare": false, 00:08:09.514 "compare_and_write": false, 00:08:09.514 "abort": true, 00:08:09.514 "seek_hole": false, 00:08:09.514 "seek_data": false, 00:08:09.514 "copy": true, 00:08:09.514 "nvme_iov_md": false 00:08:09.514 }, 00:08:09.514 "memory_domains": [ 00:08:09.514 { 00:08:09.514 "dma_device_id": "system", 00:08:09.514 "dma_device_type": 1 00:08:09.514 }, 00:08:09.514 { 00:08:09.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.514 "dma_device_type": 2 00:08:09.514 } 00:08:09.514 ], 00:08:09.514 "driver_specific": {} 00:08:09.514 } 00:08:09.514 ]' 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:09.514 13:38:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:10.479 13:38:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:10.479 13:38:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:10.479 13:38:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:10.479 13:38:36 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:10.479 13:38:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:13.035 13:38:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:13.035 13:38:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:13.035 13:38:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:13.035 13:38:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:13.035 13:38:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:13.035 13:38:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:13.035 13:38:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:13.035 13:38:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:13.035 13:38:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.975 ************************************ 00:08:13.975 START TEST filesystem_in_capsule_ext4 00:08:13.975 
************************************ 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:13.975 mke2fs 1.46.5 (30-Dec-2021) 00:08:13.975 Discarding device blocks: 0/522240 done 00:08:13.975 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:13.975 Filesystem UUID: a4988947-0d5a-40dd-9808-b66dd1aca517 00:08:13.975 Superblock backups stored on blocks: 00:08:13.975 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:13.975 00:08:13.975 Allocating group tables: 0/64 done 00:08:13.975 Writing inode tables: 0/64 done 00:08:13.975 Creating journal (8192 blocks): done 00:08:13.975 Writing superblocks and filesystem accounting information: 0/64 done 00:08:13.975 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2375973 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.975 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.976 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.976 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.976 00:08:13.976 real 0m0.194s 00:08:13.976 user 0m0.030s 00:08:13.976 sys 0m0.068s 00:08:13.976 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.976 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:13.976 ************************************ 00:08:13.976 END TEST filesystem_in_capsule_ext4 00:08:13.976 ************************************ 00:08:13.976 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:13.976 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:13.976 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:13.976 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.976 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.234 ************************************ 00:08:14.234 START TEST filesystem_in_capsule_btrfs 00:08:14.234 ************************************ 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:14.235 13:38:40 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:14.235 btrfs-progs v6.6.2 00:08:14.235 See https://btrfs.readthedocs.io for more information. 00:08:14.235 00:08:14.235 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:14.235 NOTE: several default settings have changed in version 5.15, please make sure 00:08:14.235 this does not affect your deployments: 00:08:14.235 - DUP for metadata (-m dup) 00:08:14.235 - enabled no-holes (-O no-holes) 00:08:14.235 - enabled free-space-tree (-R free-space-tree) 00:08:14.235 00:08:14.235 Label: (null) 00:08:14.235 UUID: 0933428f-0034-4e19-85b6-f0ebef748b40 00:08:14.235 Node size: 16384 00:08:14.235 Sector size: 4096 00:08:14.235 Filesystem size: 510.00MiB 00:08:14.235 Block group profiles: 00:08:14.235 Data: single 8.00MiB 00:08:14.235 Metadata: DUP 32.00MiB 00:08:14.235 System: DUP 8.00MiB 00:08:14.235 SSD detected: yes 00:08:14.235 Zoned device: no 00:08:14.235 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:14.235 Runtime features: free-space-tree 00:08:14.235 Checksum: crc32c 00:08:14.235 Number of devices: 1 00:08:14.235 Devices: 00:08:14.235 ID SIZE PATH 00:08:14.235 1 510.00MiB /dev/nvme0n1p1 00:08:14.235 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2375973 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:14.235 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:14.494 00:08:14.494 real 0m0.259s 00:08:14.494 user 0m0.024s 00:08:14.494 sys 0m0.137s 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:14.494 ************************************ 00:08:14.494 END TEST filesystem_in_capsule_btrfs 00:08:14.494 ************************************ 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.494 ************************************ 00:08:14.494 START TEST filesystem_in_capsule_xfs 00:08:14.494 ************************************ 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:14.494 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:14.494 = sectsz=512 attr=2, projid32bit=1 00:08:14.494 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:14.494 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:14.494 data = bsize=4096 blocks=130560, imaxpct=25 00:08:14.494 = sunit=0 swidth=0 blks 00:08:14.494 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 
00:08:14.494 log =internal log bsize=4096 blocks=16384, version=2 00:08:14.494 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:14.494 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:14.494 Discarding blocks...Done. 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:14.494 13:38:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:14.494 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2375973 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:14.754 00:08:14.754 real 0m0.220s 00:08:14.754 user 0m0.026s 00:08:14.754 sys 0m0.081s 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:14.754 ************************************ 00:08:14.754 END TEST filesystem_in_capsule_xfs 00:08:14.754 ************************************ 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:14.754 13:38:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2375973 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2375973 ']' 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2375973 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:15.689 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.690 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2375973 00:08:15.948 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:15.948 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:15.948 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2375973' 00:08:15.948 killing process with pid 2375973 00:08:15.948 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2375973 00:08:15.948 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2375973 00:08:16.206 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:16.206 00:08:16.206 real 0m8.108s 00:08:16.206 user 0m31.322s 00:08:16.206 sys 0m1.335s 00:08:16.206 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.206 13:38:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.206 ************************************ 00:08:16.206 END TEST nvmf_filesystem_in_capsule 00:08:16.206 ************************************ 00:08:16.465 13:38:42 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:16.465 rmmod nvme_rdma 00:08:16.465 rmmod nvme_fabrics 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:16.465 00:08:16.465 real 0m23.514s 00:08:16.465 user 1m4.460s 00:08:16.465 sys 0m8.141s 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.465 13:38:42 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.465 ************************************ 00:08:16.465 END TEST nvmf_filesystem 00:08:16.465 ************************************ 00:08:16.465 13:38:42 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:16.465 13:38:42 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:16.465 13:38:42 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.465 13:38:42 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.465 13:38:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:16.465 ************************************ 00:08:16.465 START TEST nvmf_target_discovery 00:08:16.465 ************************************ 00:08:16.465 13:38:42 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:16.724 * Looking for test storage... 
00:08:16.724 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.724 13:38:43 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.725 13:38:43 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.294 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:23.295 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:23.295 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.295 13:38:49 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:23.295 Found net devices under 0000:18:00.0: mlx_0_0 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:23.295 Found net devices under 0000:18:00.1: mlx_0_1 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:23.295 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:23.295 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:23.295 altname enp24s0f0np0 00:08:23.295 altname ens785f0np0 00:08:23.295 inet 192.168.100.8/24 scope global mlx_0_0 00:08:23.295 valid_lft forever preferred_lft forever 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:23.295 13:38:49 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:23.295 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:23.295 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:23.295 altname enp24s0f1np1 00:08:23.295 altname ens785f1np1 00:08:23.295 inet 192.168.100.9/24 scope global mlx_0_1 00:08:23.295 valid_lft forever preferred_lft forever 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:23.295 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:23.296 192.168.100.9' 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:23.296 192.168.100.9' 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:23.296 192.168.100.9' 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:08:23.296 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2380171 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2380171 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2380171 ']' 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
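The trace up to this point shows the harness probing the two mlx5 ports, deriving their IPv4 addresses with the get_ip_address helper (ip -o -4 | awk | cut), and then starting the SPDK NVMe-oF target and waiting for its RPC socket. A condensed sketch of that sequence is below; it is an illustration, not the harness code verbatim. The until-loop is a simplification of the harness's waitforlisten helper, rpc_get_methods is used only as a cheap "is the RPC server up" probe, and the relative paths assume the commands are run from the spdk checkout.

get_ip_address() {
    local interface=$1
    # "ip -o -4 addr show" prints one line per address, e.g.
    # "2: mlx_0_0    inet 192.168.100.8/24 ..."; field 4 holds the CIDR address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)   # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)  # 192.168.100.9 in this run

# Start the NVMe-oF target with the same arguments as the logged invocation
# (shm id 0, all tracepoint groups, cores 0-3), then poll the default RPC
# socket until it answers instead of using the harness helper.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done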
00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.555 13:38:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.555 [2024-07-15 13:38:49.905261] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:23.555 [2024-07-15 13:38:49.905316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.555 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.555 [2024-07-15 13:38:49.991754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.815 [2024-07-15 13:38:50.090909] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.815 [2024-07-15 13:38:50.090946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.815 [2024-07-15 13:38:50.090956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.815 [2024-07-15 13:38:50.090964] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.815 [2024-07-15 13:38:50.090972] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.815 [2024-07-15 13:38:50.091024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.815 [2024-07-15 13:38:50.091062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.815 [2024-07-15 13:38:50.091175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.815 [2024-07-15 13:38:50.091176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.384 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.384 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:24.384 13:38:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.384 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:24.384 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.384 13:38:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.384 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:24.384 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.384 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.384 [2024-07-15 13:38:50.791325] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x820180/0x824670) succeed. 00:08:24.384 [2024-07-15 13:38:50.800968] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8217c0/0x865d00) succeed. 
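The next stretch of the trace builds the discovery fixture: an RDMA transport, then for each of cnode1..cnode4 a null bdev, a subsystem, a namespace, and an RDMA listener, followed by a discovery listener and a referral. The sketch below condenses those rpc_cmd calls; rpc_cmd is the harness's wrapper around the SPDK RPC client, so scripts/rpc.py is called here directly with the same arguments as logged.

rpc=./scripts/rpc.py

# RDMA transport with 1024 shared data buffers and the logged -u 8192 I/O unit size.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

for i in $(seq 1 4); do
    # One null bdev per subsystem (512-byte blocks), exposed as namespace 1 of
    # nqn.2016-06.io.spdk:cnode$i behind an RDMA listener on 192.168.100.8:4420.
    $rpc bdev_null_create Null$i 102400 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done

# Discovery service on 4420 plus a referral to 4430; together with the four
# subsystems this is what yields the six discovery log records printed below.
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430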
00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.643 Null1 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.643 [2024-07-15 13:38:50.975927] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.643 Null2 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:24.643 13:38:50 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.643 13:38:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 Null3 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 Null4 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.644 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:08:24.904 00:08:24.904 Discovery Log Number of Records 6, Generation counter 6 00:08:24.904 =====Discovery Log Entry 0====== 00:08:24.904 trtype: rdma 00:08:24.904 adrfam: ipv4 00:08:24.904 subtype: current discovery subsystem 00:08:24.904 treq: not required 00:08:24.904 portid: 0 00:08:24.904 trsvcid: 4420 00:08:24.904 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:24.904 traddr: 192.168.100.8 00:08:24.904 eflags: explicit discovery connections, duplicate discovery information 00:08:24.904 rdma_prtype: not specified 00:08:24.904 rdma_qptype: connected 00:08:24.904 rdma_cms: rdma-cm 00:08:24.904 rdma_pkey: 0x0000 00:08:24.904 =====Discovery Log Entry 1====== 00:08:24.904 trtype: rdma 00:08:24.904 adrfam: ipv4 00:08:24.904 subtype: nvme subsystem 00:08:24.904 treq: not required 00:08:24.904 portid: 0 00:08:24.904 trsvcid: 4420 00:08:24.904 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:24.904 traddr: 192.168.100.8 00:08:24.904 eflags: none 00:08:24.904 rdma_prtype: not specified 00:08:24.904 rdma_qptype: connected 00:08:24.904 rdma_cms: rdma-cm 00:08:24.904 rdma_pkey: 0x0000 00:08:24.904 =====Discovery Log Entry 2====== 00:08:24.904 
trtype: rdma 00:08:24.904 adrfam: ipv4 00:08:24.904 subtype: nvme subsystem 00:08:24.904 treq: not required 00:08:24.904 portid: 0 00:08:24.904 trsvcid: 4420 00:08:24.904 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:24.904 traddr: 192.168.100.8 00:08:24.904 eflags: none 00:08:24.904 rdma_prtype: not specified 00:08:24.904 rdma_qptype: connected 00:08:24.904 rdma_cms: rdma-cm 00:08:24.904 rdma_pkey: 0x0000 00:08:24.904 =====Discovery Log Entry 3====== 00:08:24.904 trtype: rdma 00:08:24.904 adrfam: ipv4 00:08:24.904 subtype: nvme subsystem 00:08:24.904 treq: not required 00:08:24.904 portid: 0 00:08:24.904 trsvcid: 4420 00:08:24.904 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:24.904 traddr: 192.168.100.8 00:08:24.904 eflags: none 00:08:24.904 rdma_prtype: not specified 00:08:24.904 rdma_qptype: connected 00:08:24.904 rdma_cms: rdma-cm 00:08:24.904 rdma_pkey: 0x0000 00:08:24.904 =====Discovery Log Entry 4====== 00:08:24.904 trtype: rdma 00:08:24.904 adrfam: ipv4 00:08:24.904 subtype: nvme subsystem 00:08:24.904 treq: not required 00:08:24.904 portid: 0 00:08:24.904 trsvcid: 4420 00:08:24.904 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:24.904 traddr: 192.168.100.8 00:08:24.904 eflags: none 00:08:24.904 rdma_prtype: not specified 00:08:24.904 rdma_qptype: connected 00:08:24.904 rdma_cms: rdma-cm 00:08:24.904 rdma_pkey: 0x0000 00:08:24.904 =====Discovery Log Entry 5====== 00:08:24.904 trtype: rdma 00:08:24.904 adrfam: ipv4 00:08:24.904 subtype: discovery subsystem referral 00:08:24.904 treq: not required 00:08:24.904 portid: 0 00:08:24.904 trsvcid: 4430 00:08:24.904 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:24.904 traddr: 192.168.100.8 00:08:24.904 eflags: none 00:08:24.904 rdma_prtype: unrecognized 00:08:24.904 rdma_qptype: unrecognized 00:08:24.904 rdma_cms: unrecognized 00:08:24.904 rdma_pkey: 0x0000 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:24.904 Perform nvmf subsystem discovery via RPC 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.904 [ 00:08:24.904 { 00:08:24.904 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:24.904 "subtype": "Discovery", 00:08:24.904 "listen_addresses": [ 00:08:24.904 { 00:08:24.904 "trtype": "RDMA", 00:08:24.904 "adrfam": "IPv4", 00:08:24.904 "traddr": "192.168.100.8", 00:08:24.904 "trsvcid": "4420" 00:08:24.904 } 00:08:24.904 ], 00:08:24.904 "allow_any_host": true, 00:08:24.904 "hosts": [] 00:08:24.904 }, 00:08:24.904 { 00:08:24.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.904 "subtype": "NVMe", 00:08:24.904 "listen_addresses": [ 00:08:24.904 { 00:08:24.904 "trtype": "RDMA", 00:08:24.904 "adrfam": "IPv4", 00:08:24.904 "traddr": "192.168.100.8", 00:08:24.904 "trsvcid": "4420" 00:08:24.904 } 00:08:24.904 ], 00:08:24.904 "allow_any_host": true, 00:08:24.904 "hosts": [], 00:08:24.904 "serial_number": "SPDK00000000000001", 00:08:24.904 "model_number": "SPDK bdev Controller", 00:08:24.904 "max_namespaces": 32, 00:08:24.904 "min_cntlid": 1, 00:08:24.904 "max_cntlid": 65519, 00:08:24.904 "namespaces": [ 00:08:24.904 { 00:08:24.904 "nsid": 1, 00:08:24.904 "bdev_name": "Null1", 00:08:24.904 "name": "Null1", 00:08:24.904 "nguid": "512C8C7F1F34444BA0349296EB7E9EB1", 00:08:24.904 "uuid": 
"512c8c7f-1f34-444b-a034-9296eb7e9eb1" 00:08:24.904 } 00:08:24.904 ] 00:08:24.904 }, 00:08:24.904 { 00:08:24.904 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:24.904 "subtype": "NVMe", 00:08:24.904 "listen_addresses": [ 00:08:24.904 { 00:08:24.904 "trtype": "RDMA", 00:08:24.904 "adrfam": "IPv4", 00:08:24.904 "traddr": "192.168.100.8", 00:08:24.904 "trsvcid": "4420" 00:08:24.904 } 00:08:24.904 ], 00:08:24.904 "allow_any_host": true, 00:08:24.904 "hosts": [], 00:08:24.904 "serial_number": "SPDK00000000000002", 00:08:24.904 "model_number": "SPDK bdev Controller", 00:08:24.904 "max_namespaces": 32, 00:08:24.904 "min_cntlid": 1, 00:08:24.904 "max_cntlid": 65519, 00:08:24.904 "namespaces": [ 00:08:24.904 { 00:08:24.904 "nsid": 1, 00:08:24.904 "bdev_name": "Null2", 00:08:24.904 "name": "Null2", 00:08:24.904 "nguid": "1485CCD282B9432E8E8A5C6FF57217BF", 00:08:24.904 "uuid": "1485ccd2-82b9-432e-8e8a-5c6ff57217bf" 00:08:24.904 } 00:08:24.904 ] 00:08:24.904 }, 00:08:24.904 { 00:08:24.904 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:24.904 "subtype": "NVMe", 00:08:24.904 "listen_addresses": [ 00:08:24.904 { 00:08:24.904 "trtype": "RDMA", 00:08:24.904 "adrfam": "IPv4", 00:08:24.904 "traddr": "192.168.100.8", 00:08:24.904 "trsvcid": "4420" 00:08:24.904 } 00:08:24.904 ], 00:08:24.904 "allow_any_host": true, 00:08:24.904 "hosts": [], 00:08:24.904 "serial_number": "SPDK00000000000003", 00:08:24.904 "model_number": "SPDK bdev Controller", 00:08:24.904 "max_namespaces": 32, 00:08:24.904 "min_cntlid": 1, 00:08:24.904 "max_cntlid": 65519, 00:08:24.904 "namespaces": [ 00:08:24.904 { 00:08:24.904 "nsid": 1, 00:08:24.904 "bdev_name": "Null3", 00:08:24.904 "name": "Null3", 00:08:24.904 "nguid": "C538FCCAEF244958B4A93A420EDAFBFA", 00:08:24.904 "uuid": "c538fcca-ef24-4958-b4a9-3a420edafbfa" 00:08:24.904 } 00:08:24.904 ] 00:08:24.904 }, 00:08:24.904 { 00:08:24.904 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:24.904 "subtype": "NVMe", 00:08:24.904 "listen_addresses": [ 00:08:24.904 { 00:08:24.904 "trtype": "RDMA", 00:08:24.904 "adrfam": "IPv4", 00:08:24.904 "traddr": "192.168.100.8", 00:08:24.904 "trsvcid": "4420" 00:08:24.904 } 00:08:24.904 ], 00:08:24.904 "allow_any_host": true, 00:08:24.904 "hosts": [], 00:08:24.904 "serial_number": "SPDK00000000000004", 00:08:24.904 "model_number": "SPDK bdev Controller", 00:08:24.904 "max_namespaces": 32, 00:08:24.904 "min_cntlid": 1, 00:08:24.904 "max_cntlid": 65519, 00:08:24.904 "namespaces": [ 00:08:24.904 { 00:08:24.904 "nsid": 1, 00:08:24.904 "bdev_name": "Null4", 00:08:24.904 "name": "Null4", 00:08:24.904 "nguid": "24E3C654094A4F0B9686F7DC5C53FDCC", 00:08:24.904 "uuid": "24e3c654-094a-4f0b-9686-f7dc5c53fdcc" 00:08:24.904 } 00:08:24.904 ] 00:08:24.904 } 00:08:24.904 ] 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:24.904 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:24.905 rmmod nvme_rdma 00:08:24.905 rmmod nvme_fabrics 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2380171 ']' 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2380171 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2380171 ']' 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2380171 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.905 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2380171 00:08:25.164 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:25.164 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:25.164 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2380171' 00:08:25.164 killing process with pid 2380171 00:08:25.164 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2380171 00:08:25.164 13:38:51 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@972 -- # wait 2380171 00:08:25.425 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:25.425 13:38:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:25.425 00:08:25.425 real 0m8.832s 00:08:25.425 user 0m8.505s 00:08:25.425 sys 0m5.719s 00:08:25.425 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.425 13:38:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.425 ************************************ 00:08:25.425 END TEST nvmf_target_discovery 00:08:25.425 ************************************ 00:08:25.425 13:38:51 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:25.425 13:38:51 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:25.425 13:38:51 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.425 13:38:51 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.425 13:38:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:25.425 ************************************ 00:08:25.425 START TEST nvmf_referrals 00:08:25.425 ************************************ 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:25.425 * Looking for test storage... 00:08:25.425 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.425 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:25.685 13:38:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.686 13:38:51 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:32.257 13:38:58 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:32.257 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:32.257 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 
0x1015 == \0\x\1\0\1\9 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:32.257 Found net devices under 0000:18:00.0: mlx_0_0 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:32.257 Found net devices under 0000:18:00.1: mlx_0_1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:32.257 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.257 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:32.257 altname enp24s0f0np0 00:08:32.257 altname ens785f0np0 00:08:32.257 inet 192.168.100.8/24 scope global mlx_0_0 00:08:32.257 valid_lft forever preferred_lft forever 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:32.257 
13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:32.257 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:32.257 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:32.257 altname enp24s0f1np1 00:08:32.257 altname ens785f1np1 00:08:32.257 inet 192.168.100.9/24 scope global mlx_0_1 00:08:32.257 valid_lft forever preferred_lft forever 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:32.257 192.168.100.9' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:32.257 192.168.100.9' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:32.257 192.168.100.9' 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:08:32.257 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2383421 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2383421 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2383421 ']' 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
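
Note on the step just traced: the head -n 1 / tail -n +2 pipeline is how the harness splits the newline-separated RDMA_IP_LIST into the first and second target addresses. A minimal standalone sketch of only that step, using the values printed in this run (variable names copied from the trace, not the full common.sh logic):

  # sketch: derive the two target IPs from the list gathered above
  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # -> 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # -> 192.168.100.9
  echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"
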
00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.517 13:38:58 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.517 [2024-07-15 13:38:58.871439] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:32.517 [2024-07-15 13:38:58.871499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.517 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.517 [2024-07-15 13:38:58.959123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.775 [2024-07-15 13:38:59.049812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.775 [2024-07-15 13:38:59.049855] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.775 [2024-07-15 13:38:59.049864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.775 [2024-07-15 13:38:59.049872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.775 [2024-07-15 13:38:59.049899] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.775 [2024-07-15 13:38:59.049970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.775 [2024-07-15 13:38:59.050086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.775 [2024-07-15 13:38:59.050174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.775 [2024-07-15 13:38:59.050175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.342 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.342 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:33.342 13:38:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.342 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.342 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.342 13:38:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.342 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:33.342 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.342 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.342 [2024-07-15 13:38:59.756342] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x654180/0x658670) succeed. 00:08:33.342 [2024-07-15 13:38:59.765836] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6557c0/0x699d00) succeed. 
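
For orientation: rpc_cmd in this harness appears to be a thin wrapper that forwards its arguments to SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket the target just opened (an assumption about autotest_common.sh, which this excerpt does not show). The transport, discovery listener, and referral setup performed here and in the lines that follow could be replayed by hand roughly as:

  # hypothetical manual replay of the setup calls seen in the trace;
  # assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock
  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192      # same flags as above
  rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery   # discovery service on 8009
  rpc nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430             # one of the three referrals added below
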
00:08:33.599 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.599 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.600 [2024-07-15 13:38:59.899235] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:33.600 13:38:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.600 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.600 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.600 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:33.600 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.600 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.600 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.600 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.600 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.857 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.857 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.857 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:33.857 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.857 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.857 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.857 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:33.857 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 
--hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.858 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:34.116 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.373 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.374 
13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:34.374 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.374 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:34.374 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:34.374 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.374 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:34.374 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:34.374 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.374 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:34.374 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.630 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:34.630 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:34.630 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.630 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.630 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:34.630 13:39:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.630 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:34.888 rmmod nvme_rdma 00:08:34.888 rmmod nvme_fabrics 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2383421 ']' 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2383421 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2383421 ']' 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2383421 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2383421 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2383421' 00:08:34.888 killing process with pid 2383421 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2383421 00:08:34.888 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2383421 00:08:35.147 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.147 13:39:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 
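
Condensing what the referrals test just exercised before tearing down: add a referral via RPC, read it back both from the target side and from the host side through the discovery service, then remove it and confirm the list is empty. In outline (commands lifted from the trace; the hostnqn/hostid values are this CI host's):

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  # add a referral and read it back from the target side
  rpc nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430
  rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'            # -> 127.0.0.2
  # read it back from the host side through the discovery log
  nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 \
      --hostid=809f3706-e051-e711-906e-0017a4403562 |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # remove it and confirm the referral list is empty again
  rpc nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430
  rpc nvmf_discovery_get_referrals | jq length                             # -> 0
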
00:08:35.147 00:08:35.147 real 0m9.773s 00:08:35.147 user 0m12.703s 00:08:35.147 sys 0m6.168s 00:08:35.147 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.147 13:39:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:35.147 ************************************ 00:08:35.147 END TEST nvmf_referrals 00:08:35.147 ************************************ 00:08:35.147 13:39:01 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:35.147 13:39:01 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:35.147 13:39:01 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:35.147 13:39:01 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.147 13:39:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:35.405 ************************************ 00:08:35.405 START TEST nvmf_connect_disconnect 00:08:35.405 ************************************ 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:35.405 * Looking for test storage... 00:08:35.405 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.405 13:39:01 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
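
Aside on the hostnqn/hostid pair common.sh derived a few lines up: nvme-cli mints the NQN, and the bare UUID can be peeled off with a parameter expansion. How common.sh itself derives NVME_HOSTID is not shown in this excerpt, so treat this as an illustrative equivalent rather than the harness's code:

  HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:809f3706-...
  HOSTID=${HOSTNQN##*uuid:}          # strip everything up to and including "uuid:"
  echo "hostnqn=$HOSTNQN hostid=$HOSTID"
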
00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:35.405 13:39:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.003 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.003 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.003 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.003 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.004 13:39:08 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:42.004 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:42.004 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound 
]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:42.004 Found net devices under 0000:18:00.0: mlx_0_0 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:42.004 Found net devices under 0000:18:00.1: mlx_0_1 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 
-- # modprobe ib_umad 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:08:42.004 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:42.004 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:08:42.004 altname enp24s0f0np0 00:08:42.004 altname ens785f0np0 00:08:42.004 inet 192.168.100.8/24 scope global mlx_0_0 00:08:42.004 valid_lft forever preferred_lft forever 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:42.004 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:42.004 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:42.004 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:08:42.004 altname enp24s0f1np1 00:08:42.004 altname ens785f1np1 00:08:42.004 inet 192.168.100.9/24 scope global mlx_0_1 00:08:42.004 valid_lft forever preferred_lft forever 00:08:42.005 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:42.005 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.005 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:42.005 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:42.005 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:42.005 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:42.005 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:42.005 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:42.005 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:42.005 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:42.264 192.168.100.9' 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:42.264 192.168.100.9' 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:42.264 192.168.100.9' 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.264 13:39:08 
nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2387273 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2387273 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2387273 ']' 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.264 13:39:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.264 [2024-07-15 13:39:08.680798] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:08:42.264 [2024-07-15 13:39:08.680859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.264 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.264 [2024-07-15 13:39:08.767841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.522 [2024-07-15 13:39:08.857355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.522 [2024-07-15 13:39:08.857402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.522 [2024-07-15 13:39:08.857413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.523 [2024-07-15 13:39:08.857421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.523 [2024-07-15 13:39:08.857428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
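The per-interface address discovery traced just above is easy to lose in the wrapped xtrace output, so here it is condensed into a short sketch. The ip/awk/cut pipeline, the interface names, and the head/tail parsing of RDMA_IP_LIST are copied from the echoed nvmf/common.sh commands; hard-coding the two interface names in the loop is a simplification (the real script walks get_rdma_if_list):

  # get_ip_address <ifc>: field 4 of `ip -o -4 addr show` is ADDR/PREFIX,
  # so awk picks it out and cut drops the prefix length.
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8
  get_ip_address mlx_0_1   # -> 192.168.100.9

  # Interface list hard-coded here for illustration only.
  RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9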
00:08:42.523 [2024-07-15 13:39:08.857541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.523 [2024-07-15 13:39:08.857664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.523 [2024-07-15 13:39:08.857702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.523 [2024-07-15 13:39:08.857703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.091 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.091 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:43.091 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.091 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.091 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.091 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.091 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:43.091 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.091 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.091 [2024-07-15 13:39:09.550619] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:43.091 [2024-07-15 13:39:09.571182] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xaee180/0xaf2670) succeed. 00:08:43.091 [2024-07-15 13:39:09.580646] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xaef7c0/0xb33d00) succeed. 
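Stripped of the xtrace prefixes, the connect_disconnect test being brought up here comes down to the sequence below. The rpc.py calls and their arguments are copied verbatim from the trace (rpc_cmd in the log is a thin wrapper around scripts/rpc.py); the nvme-cli connect/disconnect pair is a hedged reconstruction of what each of the five iterations reported further down does, not a verbatim copy of target/connect_disconnect.sh:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

  # Target side: RDMA transport, a 64 MB malloc bdev with 512-byte blocks,
  # one subsystem with one namespace, listening on the first RDMA port.
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  $rpc bdev_malloc_create 64 512        # returns the bdev name "Malloc0"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # Initiator side, five times over (num_iterations=5 in the trace); each
  # disconnect prints the "NQN:... disconnected 1 controller(s)" lines seen below.
  for i in $(seq 5); do
      nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done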
00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.351 [2024-07-15 13:39:09.727705] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:43.351 13:39:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:47.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:03.442 rmmod nvme_rdma 00:09:03.442 rmmod nvme_fabrics 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2387273 ']' 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2387273 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2387273 ']' 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2387273 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2387273 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2387273' 00:09:03.442 killing process with pid 2387273 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2387273 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2387273 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:03.442 00:09:03.442 real 0m28.268s 00:09:03.442 user 1m25.741s 00:09:03.442 sys 0m6.350s 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.442 13:39:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:03.442 ************************************ 00:09:03.442 END TEST nvmf_connect_disconnect 00:09:03.442 ************************************ 00:09:03.702 13:39:30 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:03.702 13:39:30 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:03.702 13:39:30 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.702 13:39:30 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.702 13:39:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:03.702 ************************************ 00:09:03.702 START TEST nvmf_multitarget 00:09:03.702 ************************************ 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:03.702 * Looking for test storage... 00:09:03.702 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.702 13:39:30 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:10.283 13:39:36 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:10.283 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:10.283 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:10.283 Found net devices under 0000:18:00.0: mlx_0_0 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:10.283 Found net devices under 0000:18:00.1: mlx_0_1 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:10.283 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:10.543 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:10.543 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:10.543 altname enp24s0f0np0 00:09:10.543 altname ens785f0np0 00:09:10.543 inet 192.168.100.8/24 scope global mlx_0_0 00:09:10.543 valid_lft forever preferred_lft forever 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:10.543 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:10.543 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:09:10.543 altname enp24s0f1np1 00:09:10.543 altname ens785f1np1 00:09:10.543 inet 192.168.100.9/24 scope global mlx_0_1 00:09:10.543 valid_lft forever preferred_lft forever 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:10.543 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:10.544 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:10.544 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:10.544 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:10.544 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:10.544 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:10.544 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:10.544 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:10.544 192.168.100.9' 00:09:10.544 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:10.544 192.168.100.9' 00:09:10.544 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:09:10.544 13:39:36 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:10.544 192.168.100.9' 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@458 -- # tail -n +2 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2393069 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2393069 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2393069 ']' 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.544 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:10.803 [2024-07-15 13:39:37.092667] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:10.803 [2024-07-15 13:39:37.092726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.803 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.803 [2024-07-15 13:39:37.182089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.803 [2024-07-15 13:39:37.273052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.803 [2024-07-15 13:39:37.273100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.803 [2024-07-15 13:39:37.273109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.803 [2024-07-15 13:39:37.273117] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
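Once this second nvmf_tgt instance is up, the multitarget test itself is only a handful of calls to the helper script test/nvmf/target/multitarget_rpc.py, each checked with jq. A condensed sketch, with every call, flag, and expected target count taken from the entries that follow:

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new targets
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default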
00:09:10.803 [2024-07-15 13:39:37.273124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.803 [2024-07-15 13:39:37.273241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.803 [2024-07-15 13:39:37.273379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.803 [2024-07-15 13:39:37.273484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.803 [2024-07-15 13:39:37.273485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.740 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.740 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:11.740 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:11.740 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:11.740 13:39:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:11.740 13:39:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.740 13:39:37 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:11.740 13:39:37 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:11.740 13:39:37 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:11.740 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:11.740 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:11.740 "nvmf_tgt_1" 00:09:11.740 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:12.000 "nvmf_tgt_2" 00:09:12.000 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:12.000 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:12.000 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:12.000 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:12.000 true 00:09:12.000 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:12.260 true 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- 
target/multitarget.sh@41 -- # nvmftestfini 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:12.260 rmmod nvme_rdma 00:09:12.260 rmmod nvme_fabrics 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2393069 ']' 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2393069 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2393069 ']' 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2393069 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:12.260 13:39:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.519 13:39:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2393069 00:09:12.519 13:39:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.519 13:39:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.519 13:39:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2393069' 00:09:12.519 killing process with pid 2393069 00:09:12.519 13:39:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2393069 00:09:12.519 13:39:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2393069 00:09:12.519 13:39:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.519 13:39:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:12.519 00:09:12.519 real 0m8.983s 00:09:12.519 user 0m9.641s 00:09:12.519 sys 0m5.839s 00:09:12.519 13:39:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.519 13:39:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:12.519 ************************************ 00:09:12.519 END TEST nvmf_multitarget 00:09:12.519 ************************************ 00:09:12.778 13:39:39 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:12.778 13:39:39 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:12.778 13:39:39 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:12.778 13:39:39 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.778 13:39:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:12.778 ************************************ 00:09:12.778 START TEST nvmf_rpc 00:09:12.778 
************************************ 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:12.778 * Looking for test storage... 00:09:12.778 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.778 13:39:39 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.779 13:39:39 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:12.779 13:39:39 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.463 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.463 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
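The device scan entered just above repeats for every test in this run, so its per-device step is worth showing once in compact form. A sketch of how each Mellanox PCI function is mapped to its netdev name; the two PCI addresses come from the "Found ..." lines and the sysfs glob plus prefix-stripping expansion are copied from the echoed nvmf/common.sh code:

  # Map each Mellanox PCI function to its kernel netdev name via sysfs.
  net_devs=()
  for pci in 0000:18:00.0 0000:18:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done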
00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:19.464 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:19.464 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:19.464 Found net devices under 0000:18:00.0: mlx_0_0 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:19.464 Found net devices under 0000:18:00.1: mlx_0_1 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:19.464 13:39:45 
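The device scan above (gather_supported_nvmf_pci_devs) builds a list of supported Intel E810/X722 and Mellanox PCI IDs, keeps only the mlx entries because SPDK_TEST_NVMF_NICS=mlx5, and then maps each PCI function to its kernel net device through sysfs, which is how 0000:18:00.0 and 0000:18:00.1 resolve to mlx_0_0 and mlx_0_1. A small sketch of the same sysfs lookup, assuming the two PCI addresses reported in this run:

  #!/usr/bin/env bash
  # List the net devices that sit under each RDMA-capable PCI function,
  # mirroring pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in nvmf/common.sh.
  for pci in 0000:18:00.0 0000:18:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue            # no netdev bound to this function
          echo "Found net device under $pci: $(basename "$netdir")"
      done
  done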
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:19.464 
13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:19.464 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:19.464 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:19.464 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:19.464 altname enp24s0f0np0 00:09:19.464 altname ens785f0np0 00:09:19.464 inet 192.168.100.8/24 scope global mlx_0_0 00:09:19.464 valid_lft forever preferred_lft forever 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:19.465 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:19.465 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:09:19.465 altname enp24s0f1np1 00:09:19.465 altname ens785f1np1 00:09:19.465 inet 192.168.100.9/24 scope global mlx_0_1 00:09:19.465 valid_lft forever preferred_lft forever 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:19.465 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:19.465 192.168.100.9' 00:09:19.723 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:19.723 192.168.100.9' 00:09:19.723 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:09:19.723 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:19.723 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:19.723 192.168.100.9' 00:09:19.723 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:09:19.723 13:39:45 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2396197 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2396197 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2396197 ']' 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@833 -- # local 
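allocate_nic_ips and get_available_rdma_ips above walk the RDMA interface list and pull each interface's IPv4 address out of `ip -o -4 addr show`, ending up with RDMA_IP_LIST='192.168.100.8 192.168.100.9', from which NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP are taken with head/tail. The extraction pipeline itself is only three commands; a sketch using the interface names from this run:

  #!/usr/bin/env bash
  # Field 4 of `ip -o -4 addr show <if>` is "address/prefix"; cut drops the prefix.
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }

  first=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
  second=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run
  echo "NVMF target IPs: $first $second"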
rpc_addr=/var/tmp/spdk.sock 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.723 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.723 [2024-07-15 13:39:46.090693] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:19.723 [2024-07-15 13:39:46.090760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.723 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.723 [2024-07-15 13:39:46.177386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:19.981 [2024-07-15 13:39:46.275428] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.981 [2024-07-15 13:39:46.275464] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.981 [2024-07-15 13:39:46.275474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.981 [2024-07-15 13:39:46.275483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.981 [2024-07-15 13:39:46.275490] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:19.981 [2024-07-15 13:39:46.275557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.981 [2024-07-15 13:39:46.275875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.981 [2024-07-15 13:39:46.275910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.981 [2024-07-15 13:39:46.275911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:20.545 "tick_rate": 2300000000, 00:09:20.545 "poll_groups": [ 00:09:20.545 { 00:09:20.545 "name": "nvmf_tgt_poll_group_000", 00:09:20.545 "admin_qpairs": 0, 00:09:20.545 "io_qpairs": 0, 00:09:20.545 "current_admin_qpairs": 0, 00:09:20.545 "current_io_qpairs": 0, 00:09:20.545 "pending_bdev_io": 0, 00:09:20.545 "completed_nvme_io": 0, 00:09:20.545 "transports": [] 00:09:20.545 }, 00:09:20.545 { 00:09:20.545 "name": "nvmf_tgt_poll_group_001", 00:09:20.545 "admin_qpairs": 0, 00:09:20.545 "io_qpairs": 0, 00:09:20.545 "current_admin_qpairs": 0, 00:09:20.545 "current_io_qpairs": 0, 00:09:20.545 "pending_bdev_io": 0, 00:09:20.545 "completed_nvme_io": 0, 00:09:20.545 "transports": [] 00:09:20.545 }, 00:09:20.545 { 00:09:20.545 "name": "nvmf_tgt_poll_group_002", 00:09:20.545 "admin_qpairs": 0, 00:09:20.545 "io_qpairs": 0, 00:09:20.545 "current_admin_qpairs": 0, 00:09:20.545 "current_io_qpairs": 0, 00:09:20.545 "pending_bdev_io": 0, 00:09:20.545 "completed_nvme_io": 0, 00:09:20.545 "transports": [] 00:09:20.545 }, 00:09:20.545 { 00:09:20.545 "name": "nvmf_tgt_poll_group_003", 00:09:20.545 "admin_qpairs": 0, 00:09:20.545 "io_qpairs": 0, 00:09:20.545 "current_admin_qpairs": 0, 00:09:20.545 "current_io_qpairs": 0, 00:09:20.545 "pending_bdev_io": 0, 00:09:20.545 "completed_nvme_io": 0, 00:09:20.545 "transports": [] 00:09:20.545 } 00:09:20.545 ] 00:09:20.545 }' 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:20.545 13:39:46 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:20.545 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:20.545 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:20.803 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:20.803 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
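nvmfappstart above launches build/bin/nvmf_tgt with core mask 0xF, waitforlisten blocks until the app answers on /var/tmp/spdk.sock, and the first nvmf_get_stats then shows four empty poll groups (one per reactor core) with no transports attached yet. The same sequence can be driven by hand with SPDK's standalone scripts/rpc.py client instead of the harness's rpc_cmd wrapper; the polling loop below is a rough stand-in for waitforlisten, and the paths are assumptions based on the workspace layout in this log:

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

  # Start the target on four cores in the background.
  "$SPDK/build/bin/nvmf_tgt" -m 0xF &

  # Poll the RPC socket until the app is up, roughly what waitforlisten does.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done

  # At this point: one poll group per reactor core, no transports yet.
  "$SPDK/scripts/rpc.py" nvmf_get_stats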
-t rdma --num-shared-buffers 1024 -u 8192 00:09:20.803 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.803 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.803 [2024-07-15 13:39:47.105837] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16d61e0/0x16da6d0) succeed. 00:09:20.803 [2024-07-15 13:39:47.116641] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16d7820/0x171bd60) succeed. 00:09:20.803 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.803 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:20.803 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.803 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.803 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.803 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:20.803 "tick_rate": 2300000000, 00:09:20.803 "poll_groups": [ 00:09:20.803 { 00:09:20.803 "name": "nvmf_tgt_poll_group_000", 00:09:20.803 "admin_qpairs": 0, 00:09:20.803 "io_qpairs": 0, 00:09:20.803 "current_admin_qpairs": 0, 00:09:20.803 "current_io_qpairs": 0, 00:09:20.803 "pending_bdev_io": 0, 00:09:20.803 "completed_nvme_io": 0, 00:09:20.803 "transports": [ 00:09:20.803 { 00:09:20.803 "trtype": "RDMA", 00:09:20.803 "pending_data_buffer": 0, 00:09:20.803 "devices": [ 00:09:20.803 { 00:09:20.803 "name": "mlx5_0", 00:09:20.803 "polls": 16242, 00:09:20.803 "idle_polls": 16242, 00:09:20.803 "completions": 0, 00:09:20.803 "requests": 0, 00:09:20.803 "request_latency": 0, 00:09:20.803 "pending_free_request": 0, 00:09:20.803 "pending_rdma_read": 0, 00:09:20.803 "pending_rdma_write": 0, 00:09:20.803 "pending_rdma_send": 0, 00:09:20.803 "total_send_wrs": 0, 00:09:20.803 "send_doorbell_updates": 0, 00:09:20.803 "total_recv_wrs": 4096, 00:09:20.803 "recv_doorbell_updates": 1 00:09:20.803 }, 00:09:20.803 { 00:09:20.803 "name": "mlx5_1", 00:09:20.803 "polls": 16242, 00:09:20.803 "idle_polls": 16242, 00:09:20.803 "completions": 0, 00:09:20.803 "requests": 0, 00:09:20.803 "request_latency": 0, 00:09:20.803 "pending_free_request": 0, 00:09:20.803 "pending_rdma_read": 0, 00:09:20.803 "pending_rdma_write": 0, 00:09:20.803 "pending_rdma_send": 0, 00:09:20.803 "total_send_wrs": 0, 00:09:20.803 "send_doorbell_updates": 0, 00:09:20.803 "total_recv_wrs": 4096, 00:09:20.803 "recv_doorbell_updates": 1 00:09:20.803 } 00:09:20.803 ] 00:09:20.803 } 00:09:20.803 ] 00:09:20.803 }, 00:09:20.803 { 00:09:20.803 "name": "nvmf_tgt_poll_group_001", 00:09:20.803 "admin_qpairs": 0, 00:09:20.803 "io_qpairs": 0, 00:09:20.803 "current_admin_qpairs": 0, 00:09:20.803 "current_io_qpairs": 0, 00:09:20.803 "pending_bdev_io": 0, 00:09:20.803 "completed_nvme_io": 0, 00:09:20.803 "transports": [ 00:09:20.803 { 00:09:20.803 "trtype": "RDMA", 00:09:20.803 "pending_data_buffer": 0, 00:09:20.803 "devices": [ 00:09:20.803 { 00:09:20.803 "name": "mlx5_0", 00:09:20.803 "polls": 10366, 00:09:20.803 "idle_polls": 10366, 00:09:20.803 "completions": 0, 00:09:20.803 "requests": 0, 00:09:20.803 "request_latency": 0, 00:09:20.803 "pending_free_request": 0, 00:09:20.803 "pending_rdma_read": 0, 00:09:20.803 "pending_rdma_write": 0, 00:09:20.803 "pending_rdma_send": 0, 00:09:20.804 "total_send_wrs": 0, 00:09:20.804 "send_doorbell_updates": 0, 00:09:20.804 "total_recv_wrs": 4096, 00:09:20.804 "recv_doorbell_updates": 1 00:09:20.804 }, 
00:09:20.804 { 00:09:20.804 "name": "mlx5_1", 00:09:20.804 "polls": 10366, 00:09:20.804 "idle_polls": 10366, 00:09:20.804 "completions": 0, 00:09:20.804 "requests": 0, 00:09:20.804 "request_latency": 0, 00:09:20.804 "pending_free_request": 0, 00:09:20.804 "pending_rdma_read": 0, 00:09:20.804 "pending_rdma_write": 0, 00:09:20.804 "pending_rdma_send": 0, 00:09:20.804 "total_send_wrs": 0, 00:09:20.804 "send_doorbell_updates": 0, 00:09:20.804 "total_recv_wrs": 4096, 00:09:20.804 "recv_doorbell_updates": 1 00:09:20.804 } 00:09:20.804 ] 00:09:20.804 } 00:09:20.804 ] 00:09:20.804 }, 00:09:20.804 { 00:09:20.804 "name": "nvmf_tgt_poll_group_002", 00:09:20.804 "admin_qpairs": 0, 00:09:20.804 "io_qpairs": 0, 00:09:20.804 "current_admin_qpairs": 0, 00:09:20.804 "current_io_qpairs": 0, 00:09:20.804 "pending_bdev_io": 0, 00:09:20.804 "completed_nvme_io": 0, 00:09:20.804 "transports": [ 00:09:20.804 { 00:09:20.804 "trtype": "RDMA", 00:09:20.804 "pending_data_buffer": 0, 00:09:20.804 "devices": [ 00:09:20.804 { 00:09:20.804 "name": "mlx5_0", 00:09:20.804 "polls": 5698, 00:09:20.804 "idle_polls": 5698, 00:09:20.804 "completions": 0, 00:09:20.804 "requests": 0, 00:09:20.804 "request_latency": 0, 00:09:20.804 "pending_free_request": 0, 00:09:20.804 "pending_rdma_read": 0, 00:09:20.804 "pending_rdma_write": 0, 00:09:20.804 "pending_rdma_send": 0, 00:09:20.804 "total_send_wrs": 0, 00:09:20.804 "send_doorbell_updates": 0, 00:09:20.804 "total_recv_wrs": 4096, 00:09:20.804 "recv_doorbell_updates": 1 00:09:20.804 }, 00:09:20.804 { 00:09:20.804 "name": "mlx5_1", 00:09:20.804 "polls": 5698, 00:09:20.804 "idle_polls": 5698, 00:09:20.804 "completions": 0, 00:09:20.804 "requests": 0, 00:09:20.804 "request_latency": 0, 00:09:20.804 "pending_free_request": 0, 00:09:20.804 "pending_rdma_read": 0, 00:09:20.804 "pending_rdma_write": 0, 00:09:20.804 "pending_rdma_send": 0, 00:09:20.804 "total_send_wrs": 0, 00:09:20.804 "send_doorbell_updates": 0, 00:09:20.804 "total_recv_wrs": 4096, 00:09:20.804 "recv_doorbell_updates": 1 00:09:20.804 } 00:09:20.804 ] 00:09:20.804 } 00:09:20.804 ] 00:09:20.804 }, 00:09:20.804 { 00:09:20.804 "name": "nvmf_tgt_poll_group_003", 00:09:20.804 "admin_qpairs": 0, 00:09:20.804 "io_qpairs": 0, 00:09:20.804 "current_admin_qpairs": 0, 00:09:20.804 "current_io_qpairs": 0, 00:09:20.804 "pending_bdev_io": 0, 00:09:20.804 "completed_nvme_io": 0, 00:09:20.804 "transports": [ 00:09:20.804 { 00:09:20.804 "trtype": "RDMA", 00:09:20.804 "pending_data_buffer": 0, 00:09:20.804 "devices": [ 00:09:20.804 { 00:09:20.804 "name": "mlx5_0", 00:09:20.804 "polls": 872, 00:09:20.804 "idle_polls": 872, 00:09:20.804 "completions": 0, 00:09:20.804 "requests": 0, 00:09:20.804 "request_latency": 0, 00:09:20.804 "pending_free_request": 0, 00:09:20.804 "pending_rdma_read": 0, 00:09:20.804 "pending_rdma_write": 0, 00:09:20.804 "pending_rdma_send": 0, 00:09:20.804 "total_send_wrs": 0, 00:09:20.804 "send_doorbell_updates": 0, 00:09:20.804 "total_recv_wrs": 4096, 00:09:20.804 "recv_doorbell_updates": 1 00:09:20.804 }, 00:09:20.804 { 00:09:20.804 "name": "mlx5_1", 00:09:20.804 "polls": 872, 00:09:20.804 "idle_polls": 872, 00:09:20.804 "completions": 0, 00:09:20.804 "requests": 0, 00:09:20.804 "request_latency": 0, 00:09:20.804 "pending_free_request": 0, 00:09:20.804 "pending_rdma_read": 0, 00:09:20.804 "pending_rdma_write": 0, 00:09:20.804 "pending_rdma_send": 0, 00:09:20.804 "total_send_wrs": 0, 00:09:20.804 "send_doorbell_updates": 0, 00:09:20.804 "total_recv_wrs": 4096, 00:09:20.804 "recv_doorbell_updates": 1 00:09:20.804 } 
00:09:20.804 ] 00:09:20.804 } 00:09:20.804 ] 00:09:20.804 } 00:09:20.804 ] 00:09:20.804 }' 00:09:20.804 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:20.804 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:20.804 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:20.804 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:21.063 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.064 Malloc1 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.064 
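After nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 succeeds, the second nvmf_get_stats above reports an RDMA transport with a mlx5_0/mlx5_1 device pair inside every poll group, and the jsum/jcount helpers reduce that JSON with jq piped into awk and wc before the test creates the Malloc1 bdev (64 MB, 512-byte blocks) and the cnode1 subsystem. The same checks can be reproduced outside the harness; rpc.py is assumed to be the standalone RPC client from the checkout shown in the log:

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # Transport and backing bdev, with the same parameters as the trace.
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1

  # jsum-style aggregation: total io_qpairs across all poll groups (0 at this point).
  $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'

  # jcount-style check: RDMA devices visible to the first poll group (2 here: mlx5_0, mlx5_1).
  $rpc nvmf_get_stats | jq '.poll_groups[0].transports[0].devices[].name' | wc -l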
13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.064 [2024-07-15 13:39:47.573340] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:21.064 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:21.323 [2024-07-15 13:39:47.619016] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562' 00:09:21.323 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:21.323 could not add new controller: failed to 
write to nvme-fabrics device 00:09:21.323 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:21.323 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:21.323 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:21.323 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:21.323 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:21.323 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.323 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.323 13:39:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.323 13:39:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:22.258 13:39:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:22.258 13:39:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:22.258 13:39:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.258 13:39:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:22.258 13:39:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:24.155 13:39:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:24.155 13:39:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:24.155 13:39:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.155 13:39:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:24.155 13:39:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.155 13:39:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:24.155 13:39:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.087 13:39:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.087 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:25.087 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:25.087 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.346 13:39:51 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:25.346 [2024-07-15 13:39:51.690816] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562' 00:09:25.346 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:25.346 could not add new controller: failed to write to nvme-fabrics device 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:25.346 13:39:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:25.347 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.347 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.347 13:39:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.347 13:39:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:26.283 13:39:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.283 13:39:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:26.283 13:39:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 
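The connect attempts above exercise the target's host whitelist: with the listener up on 192.168.100.8:4420 and any-host access switched off (nvmf_subsystem_allow_any_host -d), nvme connect from hostnqn nqn.2014-08.org.nvmexpress:uuid:809f3706-... is rejected with "does not allow host" until that host is added via nvmf_subsystem_add_host, and after nvmf_subsystem_remove_host the rejection returns until allow_any_host -e re-opens the subsystem. A condensed sketch of the add-host path, reusing the NQNs from the log; the rpc.py path is an assumption:

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

  # Rejected while the host is not on the subsystem's allowed list.
  nvme connect -i 15 --hostnqn="$hostnqn" -t rdma -n "$nqn" -a 192.168.100.8 -s 4420 \
      || echo "connect refused, as expected"

  # Whitelist the host; the same connect now succeeds.
  $rpc nvmf_subsystem_add_host "$nqn" "$hostnqn"
  nvme connect -i 15 --hostnqn="$hostnqn" -t rdma -n "$nqn" -a 192.168.100.8 -s 4420
  nvme disconnect -n "$nqn"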
00:09:26.283 13:39:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:26.283 13:39:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:28.814 13:39:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:28.814 13:39:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:28.814 13:39:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.814 13:39:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:28.814 13:39:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.814 13:39:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:28.814 13:39:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.381 [2024-07-15 13:39:55.745092] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.381 13:39:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:30.316 13:39:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.316 13:39:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.316 13:39:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.316 13:39:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:30.316 13:39:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.851 13:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.851 13:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.851 13:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.851 13:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:32.851 13:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.851 13:39:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:32.851 13:39:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.419 13:39:59 
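From target/rpc.sh@81 onward the test repeats the same cycle for each of the five iterations set by loops=5: create cnode1, add the RDMA listener on 192.168.100.8:4420, attach Malloc1 as namespace 5, allow any host, connect and wait until a block device with serial SPDKISFASTANDAWESOME appears in lsblk, then disconnect, remove the namespace and delete the subsystem. One way to script that loop directly, with the rpc.py path and hostnqn taken from the log:

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
      $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
      $rpc nvmf_subsystem_allow_any_host "$nqn"

      nvme connect -i 15 --hostnqn="$hostnqn" -t rdma -n "$nqn" -a 192.168.100.8 -s 4420
      until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
      nvme disconnect -n "$nqn"

      $rpc nvmf_subsystem_remove_ns "$nqn" 5
      $rpc nvmf_delete_subsystem "$nqn"
  done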
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.419 [2024-07-15 13:39:59.810398] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.419 13:39:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:34.357 13:40:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:34.357 13:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:34.357 13:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.357 13:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:34.357 13:40:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:36.898 13:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:36.898 13:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:36.898 13:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.898 13:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:36.898 13:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.898 13:40:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:36.898 13:40:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.466 13:40:03 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.466 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:37.466 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:37.466 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.466 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:37.466 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.466 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:37.466 13:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.466 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.466 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.466 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.467 [2024-07-15 13:40:03.851503] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.467 13:40:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 
--hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:38.403 13:40:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.403 13:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:38.403 13:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.403 13:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:38.403 13:40:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:40.941 13:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:40.942 13:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:40.942 13:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.942 13:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:40.942 13:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.942 13:40:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:40.942 13:40:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 
-t rdma -a 192.168.100.8 -s 4420 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.510 [2024-07-15 13:40:07.914284] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.510 13:40:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:42.517 13:40:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.517 13:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:42.517 13:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.517 13:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:42.517 13:40:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:44.424 13:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:44.424 13:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:44.424 13:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.424 13:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:44.424 13:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.424 13:40:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:44.424 13:40:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
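The trace above is one iteration of the target/rpc.sh connect loop: create the subsystem, add an RDMA listener and a Malloc namespace, connect with nvme-cli, wait for the serial to appear, then disconnect and wait for it to disappear. A minimal sketch of the serial-polling pattern used by the waitforserial/waitforserial_disconnect helpers follows; the single parametrized function, its name, and its arguments are illustrative assumptions, while the lsblk/grep usage and the 15-iteration, 2-second cadence mirror the trace.

# Sketch only: the real helpers live in autotest_common.sh; names/params below are assumptions.
wait_for_serial() {
    local serial=$1 want=${2:-1} i=0 found
    while (( i++ <= 15 )); do
        # count block devices whose SERIAL column matches, as the traced lsblk | grep -c does
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
        if (( found == want )); then
            return 0
        fi
        sleep 2
    done
    return 1
}
# e.g. after 'nvme connect ... -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420':
#   wait_for_serial SPDKISFASTANDAWESOME 1
# and after 'nvme disconnect -n nqn.2016-06.io.spdk:cnode1':
#   wait_for_serial SPDKISFASTANDAWESOME 0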
00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.802 [2024-07-15 13:40:11.965419] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:45.802 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.803 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.803 13:40:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.803 13:40:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:46.739 13:40:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:46.739 13:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:46.739 13:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.739 13:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:46.739 13:40:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:48.642 13:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 
00:09:48.642 13:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:48.642 13:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.642 13:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:48.642 13:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.642 13:40:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:48.642 13:40:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.723 13:40:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 [2024-07-15 13:40:16.011522] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.723 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 [2024-07-15 13:40:16.059706] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 
13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 [2024-07-15 13:40:16.111896] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 [2024-07-15 13:40:16.160056] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 [2024-07-15 13:40:16.208273] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.724 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.982 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.982 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:49.982 "tick_rate": 2300000000, 00:09:49.982 "poll_groups": [ 00:09:49.982 { 00:09:49.982 "name": "nvmf_tgt_poll_group_000", 00:09:49.983 "admin_qpairs": 2, 00:09:49.983 "io_qpairs": 27, 00:09:49.983 "current_admin_qpairs": 0, 00:09:49.983 "current_io_qpairs": 0, 00:09:49.983 "pending_bdev_io": 0, 00:09:49.983 "completed_nvme_io": 175, 00:09:49.983 "transports": [ 00:09:49.983 { 00:09:49.983 "trtype": "RDMA", 00:09:49.983 "pending_data_buffer": 0, 00:09:49.983 "devices": [ 00:09:49.983 { 00:09:49.983 "name": "mlx5_0", 00:09:49.983 "polls": 3443106, 00:09:49.983 "idle_polls": 3442701, 00:09:49.983 "completions": 465, 00:09:49.983 "requests": 232, 00:09:49.983 "request_latency": 48461812, 00:09:49.983 "pending_free_request": 0, 00:09:49.983 "pending_rdma_read": 0, 00:09:49.983 "pending_rdma_write": 0, 00:09:49.983 "pending_rdma_send": 0, 00:09:49.983 "total_send_wrs": 407, 00:09:49.983 "send_doorbell_updates": 197, 00:09:49.983 "total_recv_wrs": 4328, 00:09:49.983 "recv_doorbell_updates": 197 00:09:49.983 }, 00:09:49.983 { 00:09:49.983 "name": "mlx5_1", 00:09:49.983 "polls": 3443106, 00:09:49.983 "idle_polls": 3443106, 00:09:49.983 "completions": 0, 00:09:49.983 "requests": 0, 00:09:49.983 "request_latency": 0, 00:09:49.983 "pending_free_request": 0, 00:09:49.983 "pending_rdma_read": 0, 00:09:49.983 "pending_rdma_write": 0, 00:09:49.983 "pending_rdma_send": 0, 00:09:49.983 "total_send_wrs": 0, 00:09:49.983 "send_doorbell_updates": 0, 00:09:49.983 "total_recv_wrs": 4096, 00:09:49.983 "recv_doorbell_updates": 1 00:09:49.983 } 
00:09:49.983 ] 00:09:49.983 } 00:09:49.983 ] 00:09:49.983 }, 00:09:49.983 { 00:09:49.983 "name": "nvmf_tgt_poll_group_001", 00:09:49.983 "admin_qpairs": 2, 00:09:49.983 "io_qpairs": 26, 00:09:49.983 "current_admin_qpairs": 0, 00:09:49.983 "current_io_qpairs": 0, 00:09:49.983 "pending_bdev_io": 0, 00:09:49.983 "completed_nvme_io": 81, 00:09:49.983 "transports": [ 00:09:49.983 { 00:09:49.983 "trtype": "RDMA", 00:09:49.983 "pending_data_buffer": 0, 00:09:49.983 "devices": [ 00:09:49.983 { 00:09:49.983 "name": "mlx5_0", 00:09:49.983 "polls": 3490036, 00:09:49.983 "idle_polls": 3489789, 00:09:49.983 "completions": 272, 00:09:49.983 "requests": 136, 00:09:49.983 "request_latency": 23682896, 00:09:49.983 "pending_free_request": 0, 00:09:49.983 "pending_rdma_read": 0, 00:09:49.983 "pending_rdma_write": 0, 00:09:49.983 "pending_rdma_send": 0, 00:09:49.983 "total_send_wrs": 216, 00:09:49.983 "send_doorbell_updates": 122, 00:09:49.983 "total_recv_wrs": 4232, 00:09:49.983 "recv_doorbell_updates": 123 00:09:49.983 }, 00:09:49.983 { 00:09:49.983 "name": "mlx5_1", 00:09:49.983 "polls": 3490036, 00:09:49.983 "idle_polls": 3490036, 00:09:49.983 "completions": 0, 00:09:49.983 "requests": 0, 00:09:49.983 "request_latency": 0, 00:09:49.983 "pending_free_request": 0, 00:09:49.983 "pending_rdma_read": 0, 00:09:49.983 "pending_rdma_write": 0, 00:09:49.983 "pending_rdma_send": 0, 00:09:49.983 "total_send_wrs": 0, 00:09:49.983 "send_doorbell_updates": 0, 00:09:49.983 "total_recv_wrs": 4096, 00:09:49.983 "recv_doorbell_updates": 1 00:09:49.983 } 00:09:49.983 ] 00:09:49.983 } 00:09:49.983 ] 00:09:49.983 }, 00:09:49.983 { 00:09:49.983 "name": "nvmf_tgt_poll_group_002", 00:09:49.983 "admin_qpairs": 1, 00:09:49.983 "io_qpairs": 26, 00:09:49.983 "current_admin_qpairs": 0, 00:09:49.983 "current_io_qpairs": 0, 00:09:49.983 "pending_bdev_io": 0, 00:09:49.983 "completed_nvme_io": 120, 00:09:49.983 "transports": [ 00:09:49.983 { 00:09:49.983 "trtype": "RDMA", 00:09:49.983 "pending_data_buffer": 0, 00:09:49.983 "devices": [ 00:09:49.983 { 00:09:49.983 "name": "mlx5_0", 00:09:49.983 "polls": 3534806, 00:09:49.983 "idle_polls": 3534543, 00:09:49.983 "completions": 297, 00:09:49.983 "requests": 148, 00:09:49.983 "request_latency": 29720926, 00:09:49.983 "pending_free_request": 0, 00:09:49.983 "pending_rdma_read": 0, 00:09:49.983 "pending_rdma_write": 0, 00:09:49.983 "pending_rdma_send": 0, 00:09:49.983 "total_send_wrs": 256, 00:09:49.983 "send_doorbell_updates": 130, 00:09:49.983 "total_recv_wrs": 4244, 00:09:49.983 "recv_doorbell_updates": 130 00:09:49.983 }, 00:09:49.983 { 00:09:49.983 "name": "mlx5_1", 00:09:49.983 "polls": 3534806, 00:09:49.983 "idle_polls": 3534806, 00:09:49.983 "completions": 0, 00:09:49.983 "requests": 0, 00:09:49.983 "request_latency": 0, 00:09:49.983 "pending_free_request": 0, 00:09:49.983 "pending_rdma_read": 0, 00:09:49.983 "pending_rdma_write": 0, 00:09:49.983 "pending_rdma_send": 0, 00:09:49.983 "total_send_wrs": 0, 00:09:49.983 "send_doorbell_updates": 0, 00:09:49.983 "total_recv_wrs": 4096, 00:09:49.983 "recv_doorbell_updates": 1 00:09:49.983 } 00:09:49.983 ] 00:09:49.983 } 00:09:49.983 ] 00:09:49.983 }, 00:09:49.983 { 00:09:49.983 "name": "nvmf_tgt_poll_group_003", 00:09:49.983 "admin_qpairs": 2, 00:09:49.983 "io_qpairs": 26, 00:09:49.983 "current_admin_qpairs": 0, 00:09:49.983 "current_io_qpairs": 0, 00:09:49.983 "pending_bdev_io": 0, 00:09:49.983 "completed_nvme_io": 79, 00:09:49.983 "transports": [ 00:09:49.983 { 00:09:49.983 "trtype": "RDMA", 00:09:49.983 "pending_data_buffer": 0, 
00:09:49.983 "devices": [ 00:09:49.983 { 00:09:49.983 "name": "mlx5_0", 00:09:49.983 "polls": 2740063, 00:09:49.983 "idle_polls": 2739820, 00:09:49.983 "completions": 266, 00:09:49.983 "requests": 133, 00:09:49.983 "request_latency": 24035614, 00:09:49.983 "pending_free_request": 0, 00:09:49.983 "pending_rdma_read": 0, 00:09:49.983 "pending_rdma_write": 0, 00:09:49.983 "pending_rdma_send": 0, 00:09:49.983 "total_send_wrs": 211, 00:09:49.983 "send_doorbell_updates": 121, 00:09:49.983 "total_recv_wrs": 4229, 00:09:49.983 "recv_doorbell_updates": 122 00:09:49.983 }, 00:09:49.983 { 00:09:49.983 "name": "mlx5_1", 00:09:49.983 "polls": 2740063, 00:09:49.983 "idle_polls": 2740063, 00:09:49.983 "completions": 0, 00:09:49.983 "requests": 0, 00:09:49.983 "request_latency": 0, 00:09:49.983 "pending_free_request": 0, 00:09:49.983 "pending_rdma_read": 0, 00:09:49.983 "pending_rdma_write": 0, 00:09:49.983 "pending_rdma_send": 0, 00:09:49.983 "total_send_wrs": 0, 00:09:49.983 "send_doorbell_updates": 0, 00:09:49.983 "total_recv_wrs": 4096, 00:09:49.983 "recv_doorbell_updates": 1 00:09:49.983 } 00:09:49.983 ] 00:09:49.983 } 00:09:49.983 ] 00:09:49.983 } 00:09:49.983 ] 00:09:49.983 }' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1300 > 0 )) 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 125901248 > 0 )) 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:49.983 13:40:16 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.983 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:49.983 rmmod nvme_rdma 00:09:49.983 rmmod nvme_fabrics 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2396197 ']' 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2396197 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2396197 ']' 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2396197 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2396197 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2396197' 00:09:50.242 killing process with pid 2396197 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2396197 00:09:50.242 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2396197 00:09:50.500 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.500 13:40:16 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:50.500 00:09:50.500 real 0m37.799s 00:09:50.501 user 2m4.114s 00:09:50.501 sys 0m7.015s 00:09:50.501 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.501 13:40:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.501 ************************************ 00:09:50.501 END TEST nvmf_rpc 00:09:50.501 ************************************ 00:09:50.501 13:40:16 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:50.501 13:40:16 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:50.501 13:40:16 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:50.501 13:40:16 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.501 13:40:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:50.501 ************************************ 00:09:50.501 START TEST nvmf_invalid 00:09:50.501 ************************************ 00:09:50.501 13:40:16 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:50.760 * Looking for test storage... 
00:09:50.760 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.760 13:40:17 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:50.760 13:40:17 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.330 
13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:57.330 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:57.330 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:57.330 Found net devices under 0000:18:00.0: mlx_0_0 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:57.330 Found net devices under 0000:18:00.1: mlx_0_1 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:57.330 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:57.331 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:57.331 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:09:57.331 altname enp24s0f0np0 00:09:57.331 altname ens785f0np0 00:09:57.331 inet 192.168.100.8/24 scope global mlx_0_0 00:09:57.331 valid_lft forever preferred_lft forever 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:57.331 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:57.331 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:09:57.331 altname enp24s0f1np1 00:09:57.331 altname ens785f1np1 00:09:57.331 inet 192.168.100.9/24 scope global mlx_0_1 00:09:57.331 valid_lft forever preferred_lft forever 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 
00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:57.331 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:57.591 192.168.100.9' 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:57.591 192.168.100.9' 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:57.591 192.168.100.9' 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 
192.168.100.8 ']' 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2403366 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2403366 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2403366 ']' 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.591 13:40:23 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:57.591 [2024-07-15 13:40:23.996973] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:09:57.591 [2024-07-15 13:40:23.997034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.591 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.591 [2024-07-15 13:40:24.086447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.851 [2024-07-15 13:40:24.183288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.851 [2024-07-15 13:40:24.183333] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.851 [2024-07-15 13:40:24.183343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.851 [2024-07-15 13:40:24.183352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.851 [2024-07-15 13:40:24.183359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
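The nvmfappstart step above comes down to backgrounding nvmf_tgt and polling until its JSON-RPC socket answers before any test case runs. A simplified sketch of that startup pattern (the relative paths and the spdk_get_version probe are illustrative assumptions, not lifted from common.sh):

    # Launch the target with all trace groups enabled and a 4-core mask,
    # then wait for /var/tmp/spdk.sock to accept RPCs.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done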
00:09:57.851 [2024-07-15 13:40:24.183429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.851 [2024-07-15 13:40:24.183533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.851 [2024-07-15 13:40:24.183634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.851 [2024-07-15 13:40:24.183634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.420 13:40:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.420 13:40:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:58.420 13:40:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:58.420 13:40:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:58.420 13:40:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:58.420 13:40:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.420 13:40:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:58.420 13:40:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10151 00:09:58.679 [2024-07-15 13:40:25.028910] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:58.679 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:58.679 { 00:09:58.679 "nqn": "nqn.2016-06.io.spdk:cnode10151", 00:09:58.679 "tgt_name": "foobar", 00:09:58.679 "method": "nvmf_create_subsystem", 00:09:58.679 "req_id": 1 00:09:58.679 } 00:09:58.679 Got JSON-RPC error response 00:09:58.679 response: 00:09:58.679 { 00:09:58.679 "code": -32603, 00:09:58.679 "message": "Unable to find target foobar" 00:09:58.679 }' 00:09:58.679 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:58.679 { 00:09:58.679 "nqn": "nqn.2016-06.io.spdk:cnode10151", 00:09:58.679 "tgt_name": "foobar", 00:09:58.679 "method": "nvmf_create_subsystem", 00:09:58.679 "req_id": 1 00:09:58.679 } 00:09:58.679 Got JSON-RPC error response 00:09:58.679 response: 00:09:58.679 { 00:09:58.679 "code": -32603, 00:09:58.679 "message": "Unable to find target foobar" 00:09:58.679 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:58.679 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:58.679 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1615 00:09:58.938 [2024-07-15 13:40:25.225650] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1615: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:58.938 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:58.938 { 00:09:58.938 "nqn": "nqn.2016-06.io.spdk:cnode1615", 00:09:58.938 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:58.938 "method": "nvmf_create_subsystem", 00:09:58.938 "req_id": 1 00:09:58.938 } 00:09:58.938 Got JSON-RPC error response 00:09:58.938 response: 00:09:58.938 { 00:09:58.938 "code": -32602, 00:09:58.938 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:58.938 }' 00:09:58.938 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # [[ 
request: 00:09:58.938 { 00:09:58.938 "nqn": "nqn.2016-06.io.spdk:cnode1615", 00:09:58.938 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:58.938 "method": "nvmf_create_subsystem", 00:09:58.938 "req_id": 1 00:09:58.938 } 00:09:58.938 Got JSON-RPC error response 00:09:58.938 response: 00:09:58.938 { 00:09:58.938 "code": -32602, 00:09:58.938 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:58.938 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:58.938 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:58.938 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10519 00:09:58.938 [2024-07-15 13:40:25.430285] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10519: invalid model number 'SPDK_Controller' 00:09:58.938 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:58.938 { 00:09:58.938 "nqn": "nqn.2016-06.io.spdk:cnode10519", 00:09:58.938 "model_number": "SPDK_Controller\u001f", 00:09:58.938 "method": "nvmf_create_subsystem", 00:09:58.938 "req_id": 1 00:09:58.938 } 00:09:58.938 Got JSON-RPC error response 00:09:58.938 response: 00:09:58.938 { 00:09:58.938 "code": -32602, 00:09:58.938 "message": "Invalid MN SPDK_Controller\u001f" 00:09:58.938 }' 00:09:58.938 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:58.938 { 00:09:58.938 "nqn": "nqn.2016-06.io.spdk:cnode10519", 00:09:58.938 "model_number": "SPDK_Controller\u001f", 00:09:58.938 "method": "nvmf_create_subsystem", 00:09:58.938 "req_id": 1 00:09:58.938 } 00:09:58.938 Got JSON-RPC error response 00:09:58.938 response: 00:09:58.938 { 00:09:58.938 "code": -32602, 00:09:58.938 "message": "Invalid MN SPDK_Controller\u001f" 00:09:58.938 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
34 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:59.198 13:40:25 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:59.198 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x69' 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'J"eVjo Rqa9H]Kr"iuUi' 00:09:59.199 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'J"eVjo Rqa9H]Kr"iuUi' nqn.2016-06.io.spdk:cnode29195 00:09:59.459 [2024-07-15 13:40:25.787488] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29195: invalid serial number 'J"eVjo Rqa9H]Kr"iuUi' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:59.459 { 00:09:59.459 "nqn": "nqn.2016-06.io.spdk:cnode29195", 00:09:59.459 "serial_number": "J\"eVjo\u007f Rqa9H]Kr\"iuUi", 00:09:59.459 "method": "nvmf_create_subsystem", 00:09:59.459 "req_id": 1 00:09:59.459 } 00:09:59.459 Got JSON-RPC error response 00:09:59.459 response: 00:09:59.459 { 00:09:59.459 "code": -32602, 00:09:59.459 "message": "Invalid SN J\"eVjo\u007f Rqa9H]Kr\"iuUi" 00:09:59.459 }' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:59.459 { 00:09:59.459 "nqn": "nqn.2016-06.io.spdk:cnode29195", 00:09:59.459 "serial_number": "J\"eVjo\u007f Rqa9H]Kr\"iuUi", 00:09:59.459 "method": "nvmf_create_subsystem", 00:09:59.459 "req_id": 1 00:09:59.459 } 00:09:59.459 Got JSON-RPC error response 00:09:59.459 response: 00:09:59.459 { 00:09:59.459 "code": -32602, 00:09:59.459 "message": "Invalid SN J\"eVjo\u007f Rqa9H]Kr\"iuUi" 00:09:59.459 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' 
'51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x36' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:59.459 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:59.460 13:40:25 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.460 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.720 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:59.720 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:59.720 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:59.720 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.720 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.720 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:59.720 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:59.720 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=' ' 00:09:59.720 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.720 13:40:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.720 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:59.721 13:40:26 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo '^WG/t66=8VEL}x$)JL2 \d .2;/"=)AJe+vLCFaSX' 00:09:59.721 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '^WG/t66=8VEL}x$)JL2 \d .2;/"=)AJe+vLCFaSX' nqn.2016-06.io.spdk:cnode31280 00:09:59.980 [2024-07-15 13:40:26.301195] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31280: invalid model number '^WG/t66=8VEL}x$)JL2 \d .2;/"=)AJe+vLCFaSX' 00:09:59.980 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:59.980 { 00:09:59.980 "nqn": "nqn.2016-06.io.spdk:cnode31280", 00:09:59.980 "model_number": "^WG/t66=8VEL}x$)JL2 \\d .2;/\"=)AJe+vLCFaSX", 00:09:59.980 "method": "nvmf_create_subsystem", 00:09:59.980 "req_id": 1 00:09:59.980 } 00:09:59.980 Got JSON-RPC error response 00:09:59.980 response: 00:09:59.980 { 00:09:59.980 "code": -32602, 00:09:59.980 "message": "Invalid MN ^WG/t66=8VEL}x$)JL2 \\d .2;/\"=)AJe+vLCFaSX" 00:09:59.980 }' 00:09:59.980 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:59.980 { 00:09:59.980 "nqn": "nqn.2016-06.io.spdk:cnode31280", 00:09:59.980 "model_number": "^WG/t66=8VEL}x$)JL2 \\d .2;/\"=)AJe+vLCFaSX", 00:09:59.980 "method": "nvmf_create_subsystem", 00:09:59.980 "req_id": 1 00:09:59.980 } 00:09:59.980 Got JSON-RPC error response 00:09:59.980 response: 00:09:59.980 { 00:09:59.980 "code": -32602, 00:09:59.980 "message": "Invalid MN ^WG/t66=8VEL}x$)JL2 \\d .2;/\"=)AJe+vLCFaSX" 00:09:59.980 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:59.980 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:10:00.239 [2024-07-15 13:40:26.511434] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc14aa0/0xc18f90) succeed. 00:10:00.239 [2024-07-15 13:40:26.520887] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc160e0/0xc5a620) succeed. 
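Each negative probe that follows repeats one pattern: issue an nvmf_create_subsystem call that must be rejected, capture the JSON-RPC error dump printed by rpc.py, and pattern-match the message. A condensed sketch of that pattern (the probe helper, the 2>&1 capture, and the || true guard are assumptions about how $out gets filled; the cntlid cases mirror the probes traced below):

    # Negative probe: the RPC must fail and its error text must contain $pattern.
    probe() {
        local pattern=$1; shift
        local out
        out=$(./scripts/rpc.py nvmf_create_subsystem "$@" 2>&1) || true
        [[ $out == *"$pattern"* ]]
    }
    probe 'Invalid cntlid range [0-65519]'     nqn.2016-06.io.spdk:cnode6288  -i 0
    probe 'Invalid cntlid range [65520-65519]' nqn.2016-06.io.spdk:cnode23255 -i 65520
    probe 'Invalid cntlid range [6-5]'         nqn.2016-06.io.spdk:cnode27629 -i 6 -I 5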
00:10:00.239 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:00.498 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:10:00.498 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:10:00.498 192.168.100.9' 00:10:00.498 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:00.498 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:10:00.498 13:40:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:10:00.758 [2024-07-15 13:40:27.049263] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:00.758 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:00.758 { 00:10:00.758 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:00.758 "listen_address": { 00:10:00.758 "trtype": "rdma", 00:10:00.758 "traddr": "192.168.100.8", 00:10:00.758 "trsvcid": "4421" 00:10:00.758 }, 00:10:00.758 "method": "nvmf_subsystem_remove_listener", 00:10:00.758 "req_id": 1 00:10:00.758 } 00:10:00.758 Got JSON-RPC error response 00:10:00.758 response: 00:10:00.758 { 00:10:00.758 "code": -32602, 00:10:00.758 "message": "Invalid parameters" 00:10:00.758 }' 00:10:00.758 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:00.758 { 00:10:00.758 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:00.758 "listen_address": { 00:10:00.758 "trtype": "rdma", 00:10:00.758 "traddr": "192.168.100.8", 00:10:00.758 "trsvcid": "4421" 00:10:00.758 }, 00:10:00.758 "method": "nvmf_subsystem_remove_listener", 00:10:00.758 "req_id": 1 00:10:00.758 } 00:10:00.758 Got JSON-RPC error response 00:10:00.758 response: 00:10:00.758 { 00:10:00.758 "code": -32602, 00:10:00.758 "message": "Invalid parameters" 00:10:00.758 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:00.758 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6288 -i 0 00:10:00.758 [2024-07-15 13:40:27.229892] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6288: invalid cntlid range [0-65519] 00:10:00.758 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:00.758 { 00:10:00.758 "nqn": "nqn.2016-06.io.spdk:cnode6288", 00:10:00.758 "min_cntlid": 0, 00:10:00.758 "method": "nvmf_create_subsystem", 00:10:00.758 "req_id": 1 00:10:00.758 } 00:10:00.758 Got JSON-RPC error response 00:10:00.758 response: 00:10:00.758 { 00:10:00.758 "code": -32602, 00:10:00.758 "message": "Invalid cntlid range [0-65519]" 00:10:00.758 }' 00:10:00.758 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:00.758 { 00:10:00.758 "nqn": "nqn.2016-06.io.spdk:cnode6288", 00:10:00.758 "min_cntlid": 0, 00:10:00.758 "method": "nvmf_create_subsystem", 00:10:00.758 "req_id": 1 00:10:00.758 } 00:10:00.758 Got JSON-RPC error response 00:10:00.758 response: 00:10:00.758 { 00:10:00.759 "code": -32602, 00:10:00.759 "message": "Invalid cntlid range [0-65519]" 00:10:00.759 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:00.759 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23255 -i 65520 00:10:01.018 [2024-07-15 13:40:27.418577] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23255: invalid cntlid range [65520-65519] 00:10:01.018 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:01.018 { 00:10:01.018 "nqn": "nqn.2016-06.io.spdk:cnode23255", 00:10:01.018 "min_cntlid": 65520, 00:10:01.018 "method": "nvmf_create_subsystem", 00:10:01.018 "req_id": 1 00:10:01.018 } 00:10:01.018 Got JSON-RPC error response 00:10:01.018 response: 00:10:01.018 { 00:10:01.018 "code": -32602, 00:10:01.018 "message": "Invalid cntlid range [65520-65519]" 00:10:01.018 }' 00:10:01.018 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:01.018 { 00:10:01.018 "nqn": "nqn.2016-06.io.spdk:cnode23255", 00:10:01.018 "min_cntlid": 65520, 00:10:01.018 "method": "nvmf_create_subsystem", 00:10:01.018 "req_id": 1 00:10:01.018 } 00:10:01.018 Got JSON-RPC error response 00:10:01.018 response: 00:10:01.018 { 00:10:01.018 "code": -32602, 00:10:01.018 "message": "Invalid cntlid range [65520-65519]" 00:10:01.018 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.018 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30399 -I 0 00:10:01.278 [2024-07-15 13:40:27.607247] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30399: invalid cntlid range [1-0] 00:10:01.278 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:01.278 { 00:10:01.278 "nqn": "nqn.2016-06.io.spdk:cnode30399", 00:10:01.278 "max_cntlid": 0, 00:10:01.278 "method": "nvmf_create_subsystem", 00:10:01.278 "req_id": 1 00:10:01.278 } 00:10:01.278 Got JSON-RPC error response 00:10:01.278 response: 00:10:01.278 { 00:10:01.278 "code": -32602, 00:10:01.278 "message": "Invalid cntlid range [1-0]" 00:10:01.278 }' 00:10:01.278 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:01.278 { 00:10:01.278 "nqn": "nqn.2016-06.io.spdk:cnode30399", 00:10:01.278 "max_cntlid": 0, 00:10:01.278 "method": "nvmf_create_subsystem", 00:10:01.278 "req_id": 1 00:10:01.278 } 00:10:01.278 Got JSON-RPC error response 00:10:01.278 response: 00:10:01.278 { 00:10:01.278 "code": -32602, 00:10:01.278 "message": "Invalid cntlid range [1-0]" 00:10:01.278 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.278 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode370 -I 65520 00:10:01.278 [2024-07-15 13:40:27.795911] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode370: invalid cntlid range [1-65520] 00:10:01.537 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:01.537 { 00:10:01.537 "nqn": "nqn.2016-06.io.spdk:cnode370", 00:10:01.537 "max_cntlid": 65520, 00:10:01.537 "method": "nvmf_create_subsystem", 00:10:01.537 "req_id": 1 00:10:01.537 } 00:10:01.537 Got JSON-RPC error response 00:10:01.537 response: 00:10:01.537 { 00:10:01.537 "code": -32602, 00:10:01.537 "message": "Invalid cntlid range [1-65520]" 00:10:01.537 }' 00:10:01.537 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:01.537 { 00:10:01.537 "nqn": "nqn.2016-06.io.spdk:cnode370", 
00:10:01.537 "max_cntlid": 65520, 00:10:01.537 "method": "nvmf_create_subsystem", 00:10:01.537 "req_id": 1 00:10:01.537 } 00:10:01.537 Got JSON-RPC error response 00:10:01.537 response: 00:10:01.537 { 00:10:01.537 "code": -32602, 00:10:01.537 "message": "Invalid cntlid range [1-65520]" 00:10:01.537 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.537 13:40:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27629 -i 6 -I 5 00:10:01.537 [2024-07-15 13:40:27.992636] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27629: invalid cntlid range [6-5] 00:10:01.537 13:40:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:01.537 { 00:10:01.537 "nqn": "nqn.2016-06.io.spdk:cnode27629", 00:10:01.537 "min_cntlid": 6, 00:10:01.537 "max_cntlid": 5, 00:10:01.537 "method": "nvmf_create_subsystem", 00:10:01.537 "req_id": 1 00:10:01.537 } 00:10:01.537 Got JSON-RPC error response 00:10:01.537 response: 00:10:01.537 { 00:10:01.537 "code": -32602, 00:10:01.537 "message": "Invalid cntlid range [6-5]" 00:10:01.537 }' 00:10:01.537 13:40:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:01.537 { 00:10:01.537 "nqn": "nqn.2016-06.io.spdk:cnode27629", 00:10:01.537 "min_cntlid": 6, 00:10:01.537 "max_cntlid": 5, 00:10:01.537 "method": "nvmf_create_subsystem", 00:10:01.537 "req_id": 1 00:10:01.537 } 00:10:01.537 Got JSON-RPC error response 00:10:01.537 response: 00:10:01.537 { 00:10:01.537 "code": -32602, 00:10:01.537 "message": "Invalid cntlid range [6-5]" 00:10:01.537 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:01.537 13:40:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:01.797 { 00:10:01.797 "name": "foobar", 00:10:01.797 "method": "nvmf_delete_target", 00:10:01.797 "req_id": 1 00:10:01.797 } 00:10:01.797 Got JSON-RPC error response 00:10:01.797 response: 00:10:01.797 { 00:10:01.797 "code": -32602, 00:10:01.797 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:01.797 }' 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:01.797 { 00:10:01.797 "name": "foobar", 00:10:01.797 "method": "nvmf_delete_target", 00:10:01.797 "req_id": 1 00:10:01.797 } 00:10:01.797 Got JSON-RPC error response 00:10:01.797 response: 00:10:01.797 { 00:10:01.797 "code": -32602, 00:10:01.797 "message": "The specified target doesn't exist, cannot delete it." 
00:10:01.797 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:01.797 rmmod nvme_rdma 00:10:01.797 rmmod nvme_fabrics 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2403366 ']' 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2403366 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2403366 ']' 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2403366 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2403366 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2403366' 00:10:01.797 killing process with pid 2403366 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2403366 00:10:01.797 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2403366 00:10:02.056 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:02.057 13:40:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:02.057 00:10:02.057 real 0m11.534s 00:10:02.057 user 0m21.393s 00:10:02.057 sys 0m6.479s 00:10:02.057 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.057 13:40:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:02.057 ************************************ 00:10:02.057 END TEST nvmf_invalid 00:10:02.057 ************************************ 00:10:02.057 13:40:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:02.057 13:40:28 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:10:02.057 13:40:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:02.057 13:40:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.057 
13:40:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:02.316 ************************************ 00:10:02.317 START TEST nvmf_abort 00:10:02.317 ************************************ 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:10:02.317 * Looking for test storage... 00:10:02.317 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:02.317 13:40:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:08.886 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:08.887 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:08.887 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:08.887 Found net devices under 0000:18:00.0: mlx_0_0 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:08.887 Found net devices under 0000:18:00.1: mlx_0_1 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:08.887 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:09.146 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:10:09.146 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:09.147 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:09.147 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:10:09.147 altname enp24s0f0np0 00:10:09.147 altname ens785f0np0 00:10:09.147 inet 192.168.100.8/24 scope global mlx_0_0 00:10:09.147 valid_lft forever preferred_lft forever 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:09.147 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:09.147 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:10:09.147 altname enp24s0f1np1 00:10:09.147 altname ens785f1np1 00:10:09.147 inet 192.168.100.9/24 scope global mlx_0_1 00:10:09.147 valid_lft forever preferred_lft forever 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:09.147 192.168.100.9' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:09.147 192.168.100.9' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:09.147 192.168.100.9' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
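A minimal sketch of the address derivation exercised in the trace above, assuming the mlx_0_0/mlx_0_1 interface names reported in this run (this is an illustrative restatement, not part of the test output):

    # Derive the RDMA target IPs the way nvmf/common.sh does: take the 4th field
    # of `ip -o -4 addr show <if>` and strip the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run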
00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2407103 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2407103 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2407103 ']' 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.147 13:40:35 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.147 [2024-07-15 13:40:35.621324] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:09.147 [2024-07-15 13:40:35.621382] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.147 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.405 [2024-07-15 13:40:35.707163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.405 [2024-07-15 13:40:35.795073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.405 [2024-07-15 13:40:35.795119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.405 [2024-07-15 13:40:35.795128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.405 [2024-07-15 13:40:35.795152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.405 [2024-07-15 13:40:35.795159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
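A condensed sketch of the target launch traced above, using the nvmf_tgt command line and workspace path shown in the log; the polling loop is a simplified stand-in for waitforlisten, not the harness's actual implementation:

    # Start nvmf_tgt with the logged core mask (-m 0xE) and event mask (-e 0xFFFF)
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Wait until the RPC socket at /var/tmp/spdk.sock answers
    until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done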
00:10:09.405 [2024-07-15 13:40:35.795282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.405 [2024-07-15 13:40:35.795386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.405 [2024-07-15 13:40:35.795385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.973 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.973 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:09.973 13:40:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.973 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.973 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.973 13:40:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.973 13:40:36 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:10:09.973 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.974 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.231 [2024-07-15 13:40:36.521754] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9f4a80/0x9f8f70) succeed. 00:10:10.231 [2024-07-15 13:40:36.531261] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9f6020/0xa3a600) succeed. 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.231 Malloc0 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.231 Delay0 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.231 [2024-07-15 13:40:36.693517] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.231 13:40:36 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:10.231 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.490 [2024-07-15 13:40:36.799162] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:12.539 Initializing NVMe Controllers 00:10:12.539 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:12.539 controller IO queue size 128 less than required 00:10:12.539 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:12.539 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:12.539 Initialization complete. Launching workers. 00:10:12.539 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 49541 00:10:12.539 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 49602, failed to submit 62 00:10:12.539 success 49542, unsuccess 60, failed 0 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:12.539 rmmod nvme_rdma 00:10:12.539 rmmod nvme_fabrics 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 
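The rpc_cmd calls driven by abort.sh in the trace above amount to roughly the following rpc.py sequence (same parameters as logged; $SPDK as in the sketch earlier). The Delay0 bdev's large per-I/O latency is presumably what keeps commands in flight long enough for the aborts counted in the summary to succeed:

    RPC="$SPDK/scripts/rpc.py"
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    # Drive abort commands against the delayed namespace (queue depth 128, 1 second, core 0)
    "$SPDK/build/examples/abort" \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128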
00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2407103 ']' 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2407103 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2407103 ']' 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2407103 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:12.539 13:40:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2407103 00:10:12.539 13:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:12.539 13:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:12.539 13:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2407103' 00:10:12.539 killing process with pid 2407103 00:10:12.539 13:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2407103 00:10:12.539 13:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2407103 00:10:12.798 13:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.798 13:40:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:12.798 00:10:12.798 real 0m10.704s 00:10:12.798 user 0m14.505s 00:10:12.798 sys 0m5.788s 00:10:12.798 13:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.798 13:40:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:12.798 ************************************ 00:10:12.798 END TEST nvmf_abort 00:10:12.798 ************************************ 00:10:13.056 13:40:39 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:13.056 13:40:39 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:13.056 13:40:39 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:13.056 13:40:39 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.056 13:40:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:13.056 ************************************ 00:10:13.056 START TEST nvmf_ns_hotplug_stress 00:10:13.056 ************************************ 00:10:13.056 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:13.057 * Looking for test storage... 
00:10:13.057 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.057 13:40:39 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:19.629 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:19.629 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.629 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:18:00.0: mlx_0_0' 00:10:19.629 Found net devices under 0000:18:00.0: mlx_0_0 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:19.630 Found net devices under 0000:18:00.1: mlx_0_1 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:19.630 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:19.889 13:40:46 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:19.889 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:19.889 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:10:19.889 altname enp24s0f0np0 00:10:19.889 altname ens785f0np0 00:10:19.889 inet 192.168.100.8/24 scope global mlx_0_0 00:10:19.889 valid_lft forever preferred_lft forever 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:19.889 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:19.889 
link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:10:19.889 altname enp24s0f1np1 00:10:19.889 altname ens785f1np1 00:10:19.889 inet 192.168.100.9/24 scope global mlx_0_1 00:10:19.889 valid_lft forever preferred_lft forever 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:19.889 192.168.100.9' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:19.889 192.168.100.9' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:19.889 192.168.100.9' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2410546 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2410546 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2410546 ']' 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
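The nvmf/common.sh trace above reduces to a short address-discovery step: enumerate the RDMA-capable netdevs, read each one's IPv4 address, and keep the first two as target addresses. A condensed sketch of what the traced helpers do (a simplification of the traced flow; the interface names mlx_0_0/mlx_0_1 and the 192.168.100.x addresses are specific to this test bed):

# Condensed from the nvmf/common.sh@112-113 and @456-463 entries above (sketch, not the full script)
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; field 4 is the CIDR form, e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(get_available_rdma_ips)                                  # "192.168.100.8" and "192.168.100.9" here
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma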
00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.889 13:40:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.889 [2024-07-15 13:40:46.401438] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:10:19.889 [2024-07-15 13:40:46.401501] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.148 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.148 [2024-07-15 13:40:46.488398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.148 [2024-07-15 13:40:46.575535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.148 [2024-07-15 13:40:46.575582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.148 [2024-07-15 13:40:46.575592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.148 [2024-07-15 13:40:46.575600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.148 [2024-07-15 13:40:46.575607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.148 [2024-07-15 13:40:46.575720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.148 [2024-07-15 13:40:46.575825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.148 [2024-07-15 13:40:46.575824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.716 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.716 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:20.716 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.716 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.716 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.975 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.975 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:20.975 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:20.975 [2024-07-15 13:40:47.449848] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1441a80/0x1445f70) succeed. 00:10:20.975 [2024-07-15 13:40:47.459318] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1443020/0x1487600) succeed. 
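With the RDMA addresses known, the trace above starts the target and creates the RDMA transport; the subsystem, bdevs, and the perf load generator follow in the entries below. Collected into one sketch (rpc.py and spdk_nvme_perf stand for their full /var/jenkins/workspace/nvmf-phy-autotest/spdk/... paths; the flags are the ones traced in this run, but the control flow is paraphrased from the trace rather than copied from the script):

# Target bring-up and workload setup, as traced here and in the entries that follow (sketch)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
waitforlisten "$nvmfpid"          # returns once /var/tmp/spdk.sock accepts RPCs

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

rpc.py bdev_malloc_create 32 512 -b Malloc0        # 32 MB malloc bdev, 512-byte blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # becomes NSID 1
rpc.py bdev_null_create NULL1 1000 512
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1      # becomes NSID 2

# 30 seconds of 512-byte random reads at queue depth 128 while namespaces are toggled underneath:
spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!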
00:10:21.234 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.493 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:21.493 [2024-07-15 13:40:47.954910] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:21.493 13:40:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:21.751 13:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:22.011 Malloc0 00:10:22.011 13:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:22.011 Delay0 00:10:22.011 13:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.271 13:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:22.529 NULL1 00:10:22.529 13:40:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:22.788 13:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2410931 00:10:22.788 13:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:22.788 13:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:22.788 13:40:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.788 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.166 Read completed with error (sct=0, sc=11) 00:10:24.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.166 13:40:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.166 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:10:24.166 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.166 13:40:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:24.166 13:40:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:24.166 true 00:10:24.166 13:40:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:24.166 13:40:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.102 13:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.360 13:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:25.360 13:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:25.360 true 00:10:25.360 13:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:25.360 13:40:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.298 13:40:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.558 13:40:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:26.558 13:40:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:26.558 true 00:10:26.558 13:40:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:26.558 13:40:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:27.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.495 13:40:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.754 13:40:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:27.754 13:40:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:27.754 true 00:10:27.754 13:40:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:27.754 13:40:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.692 13:40:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.951 13:40:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:28.951 13:40:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:28.951 true 00:10:28.951 13:40:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:28.951 13:40:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.888 13:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.147 13:40:56 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:30.147 13:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:30.147 true 00:10:30.147 13:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:30.147 13:40:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.085 13:40:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.344 13:40:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:31.344 13:40:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:31.344 true 00:10:31.344 13:40:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:31.344 13:40:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.281 13:40:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.540 13:40:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:32.540 13:40:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:32.540 true 00:10:32.540 13:40:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:32.540 13:40:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.477 13:40:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.736 13:41:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:33.736 13:41:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:33.736 true 00:10:33.736 13:41:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:33.736 13:41:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.672 13:41:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.931 13:41:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:34.931 13:41:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:34.931 true 00:10:35.188 13:41:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:35.188 13:41:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.012 13:41:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.012 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:10:36.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.012 13:41:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:36.012 13:41:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:36.270 true 00:10:36.270 13:41:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:36.270 13:41:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.201 13:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.201 13:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:37.201 13:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:37.458 true 00:10:37.458 13:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:37.458 13:41:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.393 13:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.393 13:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:38.393 13:41:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:38.650 true 00:10:38.650 13:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 2410931 00:10:38.650 13:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.585 13:41:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.585 13:41:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:39.585 13:41:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:39.843 true 00:10:39.843 13:41:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:39.843 13:41:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.780 13:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.780 13:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:40.780 13:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:41.038 true 00:10:41.038 13:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:41.038 13:41:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.974 13:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
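From here to the end of the perf run, every block of entries is one pass of the same hot-plug cycle traced at ns_hotplug_stress.sh@44-@50: while the perf process is still alive, namespace 1 is removed, Delay0 is re-attached, and NULL1 is grown by one block (null_size 1001, 1002, ...), each bdev_null_resize answering true. The suppressed "Read completed with error (sct=0, sc=11)" messages are the initiator-side read failures that accompany each hot-remove, which is what this stress test exercises. A sketch of the loop those entries imply (the actual script may structure it differently):

# Hot-plug stress loop implied by the repeating sh@44-sh@50 entries (sketch)
null_size=1000
while kill -0 "$PERF_PID"; do                                      # PERF_PID is 2410931 in this run
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove NSID 1 under I/O
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # put the delay bdev back
    null_size=$((null_size + 1))                                   # 1001, 1002, ... as logged
    rpc.py bdev_null_resize NULL1 "$null_size"                     # resize NULL1; prints "true" on success
done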
00:10:41.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.974 13:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:41.974 13:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:42.232 true 00:10:42.232 13:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:42.232 13:41:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.167 13:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.167 13:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:43.167 13:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:43.425 true 00:10:43.425 13:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:43.425 13:41:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.691 13:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.948 13:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:43.948 13:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:43.948 true 00:10:43.948 13:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:43.948 13:41:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.323 13:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.323 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.323 13:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:45.323 13:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:45.581 true 00:10:45.581 13:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:45.581 13:41:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.519 13:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.519 13:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:46.519 13:41:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:46.777 true 00:10:46.777 13:41:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:46.777 13:41:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.710 13:41:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.710 13:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:47.710 13:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 
00:10:47.968 true 00:10:47.968 13:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:47.968 13:41:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.904 13:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.904 13:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:48.904 13:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:49.162 true 00:10:49.162 13:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:49.162 13:41:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.096 13:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.096 13:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:50.096 13:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:50.353 true 00:10:50.353 13:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:50.353 13:41:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.286 13:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.286 13:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:51.286 13:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:51.544 true 00:10:51.544 13:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:51.544 13:41:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.478 13:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.478 13:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:52.478 13:41:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:52.738 true 00:10:52.738 13:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:52.738 13:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.802 13:41:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.802 13:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:53.802 13:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:53.802 true 00:10:54.061 13:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:54.061 13:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.061 13:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.319 13:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:54.319 13:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:54.576 true 00:10:54.576 13:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:54.576 13:41:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.576 13:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.834 13:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:54.834 13:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:55.092 true 00:10:55.092 13:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:55.092 13:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.350 13:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.350 13:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:55.350 13:41:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:55.608 Initializing NVMe Controllers 00:10:55.608 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:55.608 Controller IO queue size 128, less than required. 00:10:55.608 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:55.608 Controller IO queue size 128, less than required. 00:10:55.608 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:55.608 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:55.608 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:55.608 Initialization complete. Launching workers. 
00:10:55.608 ======================================================== 00:10:55.608 Latency(us) 00:10:55.608 Device Information : IOPS MiB/s Average min max 00:10:55.608 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5725.20 2.80 19930.37 930.45 1138456.89 00:10:55.608 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33419.13 16.32 3830.03 2264.12 293787.60 00:10:55.608 ======================================================== 00:10:55.608 Total : 39144.33 19.11 6184.85 930.45 1138456.89 00:10:55.608 00:10:55.608 true 00:10:55.608 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2410931 00:10:55.608 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2410931) - No such process 00:10:55.608 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2410931 00:10:55.608 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.866 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.866 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:55.866 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:55.866 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:55.866 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.866 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:56.124 null0 00:10:56.124 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:56.124 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:56.124 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:56.382 null1 00:10:56.382 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:56.382 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:56.382 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:56.639 null2 00:10:56.639 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:56.639 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:56.639 13:41:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:56.639 null3 00:10:56.639 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:56.639 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:56.639 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:56.903 null4 00:10:56.903 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:56.903 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:56.903 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:57.169 null5 00:10:57.169 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:57.169 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:57.169 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:57.169 null6 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:57.427 null7 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.427 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
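The xtrace entries above are the namespace hotplug stress loop. As a rough sketch reconstructed only from the rpc.py calls and counters visible in this trace (not the actual ns_hotplug_stress.sh source), each of the eight background workers is doing the equivalent of:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # eight null bdevs back the namespaces (arguments exactly as traced: name, 100, 4096)
  for b in 0 1 2 3 4 5 6 7; do "$rpc" bdev_null_create "null$b" 100 4096; done

  add_remove() {                       # nsid/bdev pairs match the traced calls: 1/null0 .. 8/null7
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do   # ten add/remove rounds per worker, per the "(( i < 10 ))" trace
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  pids=()
  for ((t = 0; t < 8; t++)); do        # nthreads is 8 in this run
      add_remove "$((t + 1))" "null$t" &
      pids+=($!)                       # collect worker PIDs so the script can reap them
  done
  wait "${pids[@]}"                    # the "wait 2415590 2415591 ..." entry below is this step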
00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2415590 2415591 2415593 2415596 2415599 2415601 2415603 2415605 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.428 13:41:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.688 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.688 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.688 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.688 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.688 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.688 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.688 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.688 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.947 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.207 13:41:24 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.207 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.466 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.466 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.466 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.466 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.466 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.466 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.466 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.466 13:41:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.725 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.986 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.986 13:41:25 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:59.246 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:59.246 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.246 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:59.246 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:59.246 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.246 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:59.246 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:59.246 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.505 13:41:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:59.505 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:59.505 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.505 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.764 13:41:26 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.764 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:00.024 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:00.024 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:00.024 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:00.024 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:00.024 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.024 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.024 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:00.024 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:00.284 
13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:00.284 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:00.544 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:00.544 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.544 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:00.544 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:00.544 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.544 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.544 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:00.544 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.544 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.544 13:41:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.544 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:00.804 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:00.804 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.804 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:00.804 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:00.804 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.804 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:00.804 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:00.804 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.064 13:41:27 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.064 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:01.323 rmmod nvme_rdma 00:11:01.323 rmmod nvme_fabrics 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2410546 ']' 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2410546 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2410546 ']' 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2410546 00:11:01.323 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:01.583 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:01.583 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2410546 00:11:01.583 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:01.583 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:01.583 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2410546' 00:11:01.583 killing process with pid 2410546 00:11:01.583 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2410546 00:11:01.583 13:41:27 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2410546 00:11:01.842 13:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:01.842 13:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:01.842 00:11:01.842 real 0m48.783s 00:11:01.842 user 
3m18.805s 00:11:01.842 sys 0m14.876s 00:11:01.842 13:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.842 13:41:28 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.842 ************************************ 00:11:01.842 END TEST nvmf_ns_hotplug_stress 00:11:01.842 ************************************ 00:11:01.842 13:41:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:01.842 13:41:28 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:01.842 13:41:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:01.842 13:41:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.842 13:41:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:01.842 ************************************ 00:11:01.842 START TEST nvmf_connect_stress 00:11:01.842 ************************************ 00:11:01.842 13:41:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:01.842 * Looking for test storage... 00:11:01.842 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:01.842 13:41:28 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.842 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:01.842 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.842 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.843 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.843 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.843 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.843 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.843 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.843 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.843 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.843 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:02.102 13:41:28 
nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 
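Before the connect_stress test proper begins, the trace above shows the script sourcing test/nvmf/common.sh (which generates the host NQN with `nvme gen-hostnqn` and derives the host ID from it) and paths/export.sh, whose repeated prepends are why the same Go/protoc/golangci directories appear many times in PATH. A minimal sketch of that environment setup follows; the duplicate-avoiding prepend and the parameter expansion are illustrative assumptions, not the suite's actual helper code:

    # environment sketch -- illustrative only, not the suite's helpers
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # keep only the UUID part for --hostid
    for d in /opt/golangci/1.54.2/bin /opt/protoc/21.7/bin /opt/go/1.21.1/bin; do
        case ":$PATH:" in *":$d:"*) ;; *) PATH="$d:$PATH" ;; esac   # prepend each dir once
    done
    export PATH NVME_HOSTNQN NVME_HOSTID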
00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.102 13:41:28 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.675 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:08.676 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:08.676 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:08.676 13:41:34 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:08.676 Found net devices under 0000:18:00.0: mlx_0_0 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:08.676 Found net devices under 0000:18:00.1: mlx_0_1 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:08.676 13:41:34 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:08.676 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:08.676 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:08.676 altname enp24s0f0np0 00:11:08.676 altname ens785f0np0 00:11:08.676 inet 192.168.100.8/24 scope global mlx_0_0 00:11:08.676 valid_lft forever preferred_lft forever 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:08.676 13:41:35 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:08.676 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:08.676 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:08.676 altname enp24s0f1np1 00:11:08.676 altname ens785f1np1 00:11:08.676 inet 192.168.100.9/24 scope global mlx_0_1 00:11:08.676 valid_lft forever preferred_lft forever 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:08.676 13:41:35 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:08.676 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:08.677 192.168.100.9' 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:08.677 192.168.100.9' 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:08.677 192.168.100.9' 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2419204 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2419204 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2419204 ']' 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:08.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.677 13:41:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.937 [2024-07-15 13:41:35.226407] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:08.937 [2024-07-15 13:41:35.226470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.937 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.937 [2024-07-15 13:41:35.311223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:08.937 [2024-07-15 13:41:35.392706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.937 [2024-07-15 13:41:35.392750] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.937 [2024-07-15 13:41:35.392759] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.937 [2024-07-15 13:41:35.392768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.937 [2024-07-15 13:41:35.392775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.937 [2024-07-15 13:41:35.392890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.937 [2024-07-15 13:41:35.393002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.937 [2024-07-15 13:41:35.393002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.875 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.875 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:09.875 13:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.875 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:09.875 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.875 13:41:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.875 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:09.875 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.875 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.875 [2024-07-15 13:41:36.105816] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18cea80/0x18d2f70) succeed. 00:11:09.876 [2024-07-15 13:41:36.115118] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18d0020/0x1914600) succeed. 
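The entries above cover the target bring-up for this test: nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0xE (reactors start on cores 1-3), the script waits for it to listen on /var/tmp/spdk.sock, and the RDMA transport is created, at which point both mlx5 IB devices are registered. Reproducing just those steps by hand would look roughly like the sketch below; the polling loop is an assumed stand-in for the suite's waitforlisten helper, while the RPC flags are taken verbatim from the trace. The subsystem, listener, null-bdev and connect_stress launch that follow in the trace continue from this state.

    # manual bring-up sketch (run from an SPDK checkout); illustrative, not the suite's code
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2                                   # crude stand-in for waitforlisten
    done
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192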
00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.876 [2024-07-15 13:41:36.231245] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.876 NULL1 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2419404 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.876 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.444 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.444 13:41:36 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:10.444 13:41:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.444 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.444 13:41:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.703 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.703 13:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:10.703 13:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.703 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.703 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.961 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.961 13:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:10.961 13:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.961 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.961 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.220 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.220 13:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:11.220 13:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.220 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.220 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.479 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.479 13:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:11.479 13:41:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.479 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.479 13:41:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.048 13:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.048 13:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:12.048 13:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.048 13:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.048 13:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.306 13:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.306 13:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:12.306 13:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.306 13:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.306 13:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.565 13:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.565 13:41:38 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:12.565 13:41:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.565 13:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.565 13:41:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.830 13:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.830 13:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:12.830 13:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.830 13:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.830 13:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.150 13:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.150 13:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:13.150 13:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.150 13:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.150 13:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.715 13:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.715 13:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:13.715 13:41:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.715 13:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.715 13:41:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.973 13:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.973 13:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:13.973 13:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.973 13:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.973 13:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.231 13:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.231 13:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:14.231 13:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.231 13:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.231 13:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.489 13:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.489 13:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:14.489 13:41:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.489 13:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.489 13:41:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.748 13:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.748 13:41:41 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:14.748 13:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.748 13:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.748 13:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.314 13:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.314 13:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:15.314 13:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.314 13:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.314 13:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.572 13:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.572 13:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:15.572 13:41:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.572 13:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.572 13:41:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.831 13:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.831 13:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:15.831 13:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.831 13:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.831 13:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.090 13:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.090 13:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:16.090 13:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.090 13:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.090 13:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.348 13:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.348 13:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:16.348 13:41:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.348 13:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.348 13:41:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.915 13:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.915 13:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:16.915 13:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.915 13:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.915 13:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.174 13:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.174 13:41:43 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:17.174 13:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.174 13:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.174 13:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.432 13:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.432 13:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:17.432 13:41:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.432 13:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.432 13:41:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.692 13:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.692 13:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:17.692 13:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.692 13:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.692 13:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.950 13:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.950 13:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:17.950 13:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.950 13:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.950 13:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.518 13:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.518 13:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:18.518 13:41:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.518 13:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.518 13:41:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.777 13:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.777 13:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:18.777 13:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.777 13:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.777 13:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.036 13:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.036 13:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:19.036 13:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.036 13:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.036 13:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.294 13:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.294 13:41:45 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:19.294 13:41:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.294 13:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.294 13:41:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.861 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.862 13:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:19.862 13:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.862 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.862 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.862 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2419404 00:11:20.120 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2419404) - No such process 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2419404 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:20.120 rmmod nvme_rdma 00:11:20.120 rmmod nvme_fabrics 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2419204 ']' 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2419204 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2419204 ']' 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2419204 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2419204 
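The repeating kill -0 2419404 / rpc_cmd pairs above (connect_stress.sh@34 and @35) are the script's monitoring loop: while the connect_stress I/O generator (PERF_PID 2419404) is still alive, it keeps replaying the batch of RPCs written into rpc.txt earlier by the seq 1 20 / cat appends traced at connect_stress.sh@27-28; once kill -0 reports "No such process" it waits on the PID and removes rpc.txt, then tears the target down. A condensed sketch of that pattern, with the stdin redirect and variable names as assumptions rather than the script's exact text:

    # stress-loop sketch -- keep replaying the batched RPCs while the generator runs
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"          # presumed: rpc_cmd reads the batched commands from rpc.txt
    done
    wait "$PERF_PID"               # reap the generator once it exits
    rm -f "$rpcs"                  # matches the rm -f .../rpc.txt seen in the trace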
00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2419204' 00:11:20.120 killing process with pid 2419204 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2419204 00:11:20.120 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2419204 00:11:20.378 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.378 13:41:46 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:20.378 00:11:20.378 real 0m18.524s 00:11:20.378 user 0m41.524s 00:11:20.378 sys 0m7.741s 00:11:20.378 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.378 13:41:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.378 ************************************ 00:11:20.378 END TEST nvmf_connect_stress 00:11:20.378 ************************************ 00:11:20.378 13:41:46 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:20.378 13:41:46 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:20.378 13:41:46 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:20.378 13:41:46 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.378 13:41:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:20.378 ************************************ 00:11:20.378 START TEST nvmf_fused_ordering 00:11:20.378 ************************************ 00:11:20.378 13:41:46 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:20.637 * Looking for test storage... 
00:11:20.637 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.637 13:41:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:20.637 13:41:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:27.214 13:41:53 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:27.214 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:27.214 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:27.214 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:27.215 Found net devices under 0000:18:00.0: mlx_0_0 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:27.215 Found net devices under 0000:18:00.1: mlx_0_1 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:27.215 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:27.475 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:27.475 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:27.475 altname enp24s0f0np0 00:11:27.475 altname ens785f0np0 00:11:27.475 inet 192.168.100.8/24 scope global mlx_0_0 00:11:27.475 valid_lft forever preferred_lft forever 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:27.475 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:27.475 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:27.475 altname enp24s0f1np1 00:11:27.475 altname ens785f1np1 00:11:27.475 inet 192.168.100.9/24 scope global mlx_0_1 00:11:27.475 valid_lft forever preferred_lft forever 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:27.475 192.168.100.9' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:27.475 192.168.100.9' 
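The rdma_device_init and allocate_nic_ips steps traced above reduce to loading the kernel RDMA stack and reading the first IPv4 address off each mlx port. A condensed sketch of exactly the commands visible in the trace; the interface names mlx_0_0/mlx_0_1 and the 192.168.100.0/24 addresses are the ones this node reports, not generic defaults.

# Kernel RDMA stack, as loaded by load_ib_rdma_modules above
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done
# Address harvesting: first IPv4 address of each port, prefix length stripped
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# On this node the loop prints 192.168.100.8 and 192.168.100.9, which the
# harness records as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.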
00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:27.475 192.168.100.9' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2423768 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2423768 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2423768 ']' 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.475 13:41:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:27.475 [2024-07-15 13:41:53.949443] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:27.475 [2024-07-15 13:41:53.949505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.475 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.734 [2024-07-15 13:41:54.037699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.734 [2024-07-15 13:41:54.130037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
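nvmfappstart, whose output appears above, amounts to loading the host-side nvme-rdma module and launching build/bin/nvmf_tgt with the flags shown, then waiting for its RPC socket. A by-hand sketch; polling rpc_get_methods is my stand-in for the harness's waitforlisten helper, not the helper itself.

# Host-side fabrics driver used by the later RDMA attachments
modprobe nvme-rdma
# Same flags as the trace: shm id 0, full tracepoint mask, core mask 0x2 (core 1)
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
NVMF_PID=$!
# Block until the RPC server answers on the default UNIX socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done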
00:11:27.734 [2024-07-15 13:41:54.130076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.734 [2024-07-15 13:41:54.130086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.734 [2024-07-15 13:41:54.130095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.734 [2024-07-15 13:41:54.130102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.734 [2024-07-15 13:41:54.130122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.302 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.302 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:28.302 13:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:28.302 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:28.302 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.302 13:41:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.302 13:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:28.302 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.302 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.561 [2024-07-15 13:41:54.832995] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9f4260/0x9f8750) succeed. 00:11:28.561 [2024-07-15 13:41:54.841832] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9f5760/0xa39de0) succeed. 
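With the target up, fused_ordering.sh issues its configuration through rpc_cmd, which in a stock SPDK tree is a thin wrapper around scripts/rpc.py. The same setup and client run, spelled out as direct rpc.py calls with the values from this trace (the transport call has already been logged above; the subsystem, listener, bdev, namespace, and client steps appear in the lines that follow):

# RDMA transport with 1024 shared buffers and 8192-byte IO units
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# Subsystem cnode1: allow any host, fixed serial, up to 10 namespaces
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Listen on the first mlx port over RDMA, port 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# 1000 MB null bdev with 512-byte blocks (the 1GB namespace the client reports below)
./scripts/rpc.py bdev_null_create NULL1 1000 512
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# Client side: the fused_ordering example attaches over the same transport ID
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'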
00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.561 [2024-07-15 13:41:54.911749] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.561 NULL1 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.561 13:41:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:28.561 [2024-07-15 13:41:54.974539] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:11:28.561 [2024-07-15 13:41:54.974595] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2423820 ] 00:11:28.561 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.821 Attached to nqn.2016-06.io.spdk:cnode1 00:11:28.821 Namespace ID: 1 size: 1GB 00:11:28.821 fused_ordering(0) 00:11:28.821 fused_ordering(1) 00:11:28.821 fused_ordering(2) 00:11:28.821 fused_ordering(3) 00:11:28.821 fused_ordering(4) 00:11:28.821 fused_ordering(5) 00:11:28.821 fused_ordering(6) 00:11:28.821 fused_ordering(7) 00:11:28.821 fused_ordering(8) 00:11:28.821 fused_ordering(9) 00:11:28.821 fused_ordering(10) 00:11:28.821 fused_ordering(11) 00:11:28.821 fused_ordering(12) 00:11:28.821 fused_ordering(13) 00:11:28.821 fused_ordering(14) 00:11:28.821 fused_ordering(15) 00:11:28.821 fused_ordering(16) 00:11:28.821 fused_ordering(17) 00:11:28.821 fused_ordering(18) 00:11:28.821 fused_ordering(19) 00:11:28.821 fused_ordering(20) 00:11:28.821 fused_ordering(21) 00:11:28.821 fused_ordering(22) 00:11:28.821 fused_ordering(23) 00:11:28.821 fused_ordering(24) 00:11:28.821 fused_ordering(25) 00:11:28.821 fused_ordering(26) 00:11:28.821 fused_ordering(27) 00:11:28.821 fused_ordering(28) 00:11:28.821 fused_ordering(29) 00:11:28.821 fused_ordering(30) 00:11:28.821 fused_ordering(31) 00:11:28.821 fused_ordering(32) 00:11:28.821 fused_ordering(33) 00:11:28.821 fused_ordering(34) 00:11:28.821 fused_ordering(35) 00:11:28.821 fused_ordering(36) 00:11:28.821 fused_ordering(37) 00:11:28.821 fused_ordering(38) 00:11:28.821 fused_ordering(39) 00:11:28.821 fused_ordering(40) 00:11:28.821 fused_ordering(41) 00:11:28.821 fused_ordering(42) 00:11:28.821 fused_ordering(43) 00:11:28.821 fused_ordering(44) 00:11:28.821 fused_ordering(45) 00:11:28.821 fused_ordering(46) 00:11:28.821 fused_ordering(47) 00:11:28.821 fused_ordering(48) 00:11:28.821 fused_ordering(49) 00:11:28.821 fused_ordering(50) 00:11:28.821 fused_ordering(51) 00:11:28.821 fused_ordering(52) 00:11:28.821 fused_ordering(53) 00:11:28.821 fused_ordering(54) 00:11:28.821 fused_ordering(55) 00:11:28.821 fused_ordering(56) 00:11:28.821 fused_ordering(57) 00:11:28.821 fused_ordering(58) 00:11:28.821 fused_ordering(59) 00:11:28.821 fused_ordering(60) 00:11:28.821 fused_ordering(61) 00:11:28.822 fused_ordering(62) 00:11:28.822 fused_ordering(63) 00:11:28.822 fused_ordering(64) 00:11:28.822 fused_ordering(65) 00:11:28.822 fused_ordering(66) 00:11:28.822 fused_ordering(67) 00:11:28.822 fused_ordering(68) 00:11:28.822 fused_ordering(69) 00:11:28.822 fused_ordering(70) 00:11:28.822 fused_ordering(71) 00:11:28.822 fused_ordering(72) 00:11:28.822 fused_ordering(73) 00:11:28.822 fused_ordering(74) 00:11:28.822 fused_ordering(75) 00:11:28.822 fused_ordering(76) 00:11:28.822 fused_ordering(77) 00:11:28.822 fused_ordering(78) 00:11:28.822 fused_ordering(79) 00:11:28.822 fused_ordering(80) 00:11:28.822 fused_ordering(81) 00:11:28.822 fused_ordering(82) 00:11:28.822 fused_ordering(83) 00:11:28.822 fused_ordering(84) 00:11:28.822 fused_ordering(85) 00:11:28.822 fused_ordering(86) 00:11:28.822 fused_ordering(87) 00:11:28.822 fused_ordering(88) 00:11:28.822 fused_ordering(89) 00:11:28.822 fused_ordering(90) 00:11:28.822 fused_ordering(91) 00:11:28.822 fused_ordering(92) 00:11:28.822 fused_ordering(93) 00:11:28.822 fused_ordering(94) 00:11:28.822 fused_ordering(95) 00:11:28.822 fused_ordering(96) 
00:11:28.822 fused_ordering(97) 00:11:28.822 fused_ordering(98) 00:11:28.822 fused_ordering(99) 00:11:28.822 fused_ordering(100) 00:11:28.822 fused_ordering(101) 00:11:28.822 fused_ordering(102) 00:11:28.822 fused_ordering(103) 00:11:28.822 fused_ordering(104) 00:11:28.822 fused_ordering(105) 00:11:28.822 fused_ordering(106) 00:11:28.822 fused_ordering(107) 00:11:28.822 fused_ordering(108) 00:11:28.822 fused_ordering(109) 00:11:28.822 fused_ordering(110) 00:11:28.822 fused_ordering(111) 00:11:28.822 fused_ordering(112) 00:11:28.822 fused_ordering(113) 00:11:28.822 fused_ordering(114) 00:11:28.822 fused_ordering(115) 00:11:28.822 fused_ordering(116) 00:11:28.822 fused_ordering(117) 00:11:28.822 fused_ordering(118) 00:11:28.822 fused_ordering(119) 00:11:28.822 fused_ordering(120) 00:11:28.822 fused_ordering(121) 00:11:28.822 fused_ordering(122) 00:11:28.822 fused_ordering(123) 00:11:28.822 fused_ordering(124) 00:11:28.822 fused_ordering(125) 00:11:28.822 fused_ordering(126) 00:11:28.822 fused_ordering(127) 00:11:28.822 fused_ordering(128) 00:11:28.822 fused_ordering(129) 00:11:28.822 fused_ordering(130) 00:11:28.822 fused_ordering(131) 00:11:28.822 fused_ordering(132) 00:11:28.822 fused_ordering(133) 00:11:28.822 fused_ordering(134) 00:11:28.822 fused_ordering(135) 00:11:28.822 fused_ordering(136) 00:11:28.822 fused_ordering(137) 00:11:28.822 fused_ordering(138) 00:11:28.822 fused_ordering(139) 00:11:28.822 fused_ordering(140) 00:11:28.822 fused_ordering(141) 00:11:28.822 fused_ordering(142) 00:11:28.822 fused_ordering(143) 00:11:28.822 fused_ordering(144) 00:11:28.822 fused_ordering(145) 00:11:28.822 fused_ordering(146) 00:11:28.822 fused_ordering(147) 00:11:28.822 fused_ordering(148) 00:11:28.822 fused_ordering(149) 00:11:28.822 fused_ordering(150) 00:11:28.822 fused_ordering(151) 00:11:28.822 fused_ordering(152) 00:11:28.822 fused_ordering(153) 00:11:28.822 fused_ordering(154) 00:11:28.822 fused_ordering(155) 00:11:28.822 fused_ordering(156) 00:11:28.822 fused_ordering(157) 00:11:28.822 fused_ordering(158) 00:11:28.822 fused_ordering(159) 00:11:28.822 fused_ordering(160) 00:11:28.822 fused_ordering(161) 00:11:28.822 fused_ordering(162) 00:11:28.822 fused_ordering(163) 00:11:28.822 fused_ordering(164) 00:11:28.822 fused_ordering(165) 00:11:28.822 fused_ordering(166) 00:11:28.822 fused_ordering(167) 00:11:28.822 fused_ordering(168) 00:11:28.822 fused_ordering(169) 00:11:28.822 fused_ordering(170) 00:11:28.822 fused_ordering(171) 00:11:28.822 fused_ordering(172) 00:11:28.822 fused_ordering(173) 00:11:28.822 fused_ordering(174) 00:11:28.822 fused_ordering(175) 00:11:28.822 fused_ordering(176) 00:11:28.822 fused_ordering(177) 00:11:28.822 fused_ordering(178) 00:11:28.822 fused_ordering(179) 00:11:28.822 fused_ordering(180) 00:11:28.822 fused_ordering(181) 00:11:28.822 fused_ordering(182) 00:11:28.822 fused_ordering(183) 00:11:28.822 fused_ordering(184) 00:11:28.822 fused_ordering(185) 00:11:28.822 fused_ordering(186) 00:11:28.822 fused_ordering(187) 00:11:28.822 fused_ordering(188) 00:11:28.822 fused_ordering(189) 00:11:28.822 fused_ordering(190) 00:11:28.822 fused_ordering(191) 00:11:28.822 fused_ordering(192) 00:11:28.822 fused_ordering(193) 00:11:28.822 fused_ordering(194) 00:11:28.822 fused_ordering(195) 00:11:28.822 fused_ordering(196) 00:11:28.822 fused_ordering(197) 00:11:28.822 fused_ordering(198) 00:11:28.822 fused_ordering(199) 00:11:28.822 fused_ordering(200) 00:11:28.822 fused_ordering(201) 00:11:28.822 fused_ordering(202) 00:11:28.822 fused_ordering(203) 00:11:28.822 
fused_ordering(204) 00:11:28.822 fused_ordering(205) 00:11:28.822 fused_ordering(206) 00:11:28.822 fused_ordering(207) 00:11:28.822 fused_ordering(208) 00:11:28.822 fused_ordering(209) 00:11:28.822 fused_ordering(210) 00:11:28.822 fused_ordering(211) 00:11:28.822 fused_ordering(212) 00:11:28.822 fused_ordering(213) 00:11:28.822 fused_ordering(214) 00:11:28.822 fused_ordering(215) 00:11:28.822 fused_ordering(216) 00:11:28.822 fused_ordering(217) 00:11:28.822 fused_ordering(218) 00:11:28.822 fused_ordering(219) 00:11:28.822 fused_ordering(220) 00:11:28.822 fused_ordering(221) 00:11:28.822 fused_ordering(222) 00:11:28.822 fused_ordering(223) 00:11:28.822 fused_ordering(224) 00:11:28.822 fused_ordering(225) 00:11:28.822 fused_ordering(226) 00:11:28.822 fused_ordering(227) 00:11:28.822 fused_ordering(228) 00:11:28.822 fused_ordering(229) 00:11:28.822 fused_ordering(230) 00:11:28.822 fused_ordering(231) 00:11:28.822 fused_ordering(232) 00:11:28.822 fused_ordering(233) 00:11:28.822 fused_ordering(234) 00:11:28.822 fused_ordering(235) 00:11:28.822 fused_ordering(236) 00:11:28.822 fused_ordering(237) 00:11:28.822 fused_ordering(238) 00:11:28.822 fused_ordering(239) 00:11:28.822 fused_ordering(240) 00:11:28.822 fused_ordering(241) 00:11:28.822 fused_ordering(242) 00:11:28.822 fused_ordering(243) 00:11:28.822 fused_ordering(244) 00:11:28.822 fused_ordering(245) 00:11:28.822 fused_ordering(246) 00:11:28.822 fused_ordering(247) 00:11:28.822 fused_ordering(248) 00:11:28.822 fused_ordering(249) 00:11:28.822 fused_ordering(250) 00:11:28.822 fused_ordering(251) 00:11:28.822 fused_ordering(252) 00:11:28.822 fused_ordering(253) 00:11:28.822 fused_ordering(254) 00:11:28.822 fused_ordering(255) 00:11:28.822 fused_ordering(256) 00:11:28.822 fused_ordering(257) 00:11:28.822 fused_ordering(258) 00:11:28.822 fused_ordering(259) 00:11:28.822 fused_ordering(260) 00:11:28.822 fused_ordering(261) 00:11:28.822 fused_ordering(262) 00:11:28.822 fused_ordering(263) 00:11:28.822 fused_ordering(264) 00:11:28.822 fused_ordering(265) 00:11:28.822 fused_ordering(266) 00:11:28.822 fused_ordering(267) 00:11:28.822 fused_ordering(268) 00:11:28.822 fused_ordering(269) 00:11:28.822 fused_ordering(270) 00:11:28.822 fused_ordering(271) 00:11:28.822 fused_ordering(272) 00:11:28.822 fused_ordering(273) 00:11:28.822 fused_ordering(274) 00:11:28.822 fused_ordering(275) 00:11:28.822 fused_ordering(276) 00:11:28.822 fused_ordering(277) 00:11:28.822 fused_ordering(278) 00:11:28.822 fused_ordering(279) 00:11:28.822 fused_ordering(280) 00:11:28.822 fused_ordering(281) 00:11:28.822 fused_ordering(282) 00:11:28.822 fused_ordering(283) 00:11:28.822 fused_ordering(284) 00:11:28.822 fused_ordering(285) 00:11:28.822 fused_ordering(286) 00:11:28.822 fused_ordering(287) 00:11:28.822 fused_ordering(288) 00:11:28.822 fused_ordering(289) 00:11:28.822 fused_ordering(290) 00:11:28.822 fused_ordering(291) 00:11:28.822 fused_ordering(292) 00:11:28.822 fused_ordering(293) 00:11:28.822 fused_ordering(294) 00:11:28.822 fused_ordering(295) 00:11:28.822 fused_ordering(296) 00:11:28.822 fused_ordering(297) 00:11:28.822 fused_ordering(298) 00:11:28.822 fused_ordering(299) 00:11:28.822 fused_ordering(300) 00:11:28.822 fused_ordering(301) 00:11:28.822 fused_ordering(302) 00:11:28.822 fused_ordering(303) 00:11:28.822 fused_ordering(304) 00:11:28.822 fused_ordering(305) 00:11:28.822 fused_ordering(306) 00:11:28.822 fused_ordering(307) 00:11:28.822 fused_ordering(308) 00:11:28.822 fused_ordering(309) 00:11:28.822 fused_ordering(310) 00:11:28.822 fused_ordering(311) 
00:11:28.822 fused_ordering(312) 00:11:28.822 fused_ordering(313) 00:11:28.822 fused_ordering(314) 00:11:28.822 fused_ordering(315) 00:11:28.822 fused_ordering(316) 00:11:28.822 fused_ordering(317) 00:11:28.822 fused_ordering(318) 00:11:28.822 fused_ordering(319) 00:11:28.822 fused_ordering(320) 00:11:28.822 fused_ordering(321) 00:11:28.822 fused_ordering(322) 00:11:28.822 fused_ordering(323) 00:11:28.822 fused_ordering(324) 00:11:28.822 fused_ordering(325) 00:11:28.822 fused_ordering(326) 00:11:28.822 fused_ordering(327) 00:11:28.822 fused_ordering(328) 00:11:28.822 fused_ordering(329) 00:11:28.822 fused_ordering(330) 00:11:28.822 fused_ordering(331) 00:11:28.822 fused_ordering(332) 00:11:28.822 fused_ordering(333) 00:11:28.822 fused_ordering(334) 00:11:28.822 fused_ordering(335) 00:11:28.822 fused_ordering(336) 00:11:28.822 fused_ordering(337) 00:11:28.822 fused_ordering(338) 00:11:28.822 fused_ordering(339) 00:11:28.822 fused_ordering(340) 00:11:28.822 fused_ordering(341) 00:11:28.822 fused_ordering(342) 00:11:28.822 fused_ordering(343) 00:11:28.822 fused_ordering(344) 00:11:28.822 fused_ordering(345) 00:11:28.822 fused_ordering(346) 00:11:28.822 fused_ordering(347) 00:11:28.822 fused_ordering(348) 00:11:28.823 fused_ordering(349) 00:11:28.823 fused_ordering(350) 00:11:28.823 fused_ordering(351) 00:11:28.823 fused_ordering(352) 00:11:28.823 fused_ordering(353) 00:11:28.823 fused_ordering(354) 00:11:28.823 fused_ordering(355) 00:11:28.823 fused_ordering(356) 00:11:28.823 fused_ordering(357) 00:11:28.823 fused_ordering(358) 00:11:28.823 fused_ordering(359) 00:11:28.823 fused_ordering(360) 00:11:28.823 fused_ordering(361) 00:11:28.823 fused_ordering(362) 00:11:28.823 fused_ordering(363) 00:11:28.823 fused_ordering(364) 00:11:28.823 fused_ordering(365) 00:11:28.823 fused_ordering(366) 00:11:28.823 fused_ordering(367) 00:11:28.823 fused_ordering(368) 00:11:28.823 fused_ordering(369) 00:11:28.823 fused_ordering(370) 00:11:28.823 fused_ordering(371) 00:11:28.823 fused_ordering(372) 00:11:28.823 fused_ordering(373) 00:11:28.823 fused_ordering(374) 00:11:28.823 fused_ordering(375) 00:11:28.823 fused_ordering(376) 00:11:28.823 fused_ordering(377) 00:11:28.823 fused_ordering(378) 00:11:28.823 fused_ordering(379) 00:11:28.823 fused_ordering(380) 00:11:28.823 fused_ordering(381) 00:11:28.823 fused_ordering(382) 00:11:28.823 fused_ordering(383) 00:11:28.823 fused_ordering(384) 00:11:28.823 fused_ordering(385) 00:11:28.823 fused_ordering(386) 00:11:28.823 fused_ordering(387) 00:11:28.823 fused_ordering(388) 00:11:28.823 fused_ordering(389) 00:11:28.823 fused_ordering(390) 00:11:28.823 fused_ordering(391) 00:11:28.823 fused_ordering(392) 00:11:28.823 fused_ordering(393) 00:11:28.823 fused_ordering(394) 00:11:28.823 fused_ordering(395) 00:11:28.823 fused_ordering(396) 00:11:28.823 fused_ordering(397) 00:11:28.823 fused_ordering(398) 00:11:28.823 fused_ordering(399) 00:11:28.823 fused_ordering(400) 00:11:28.823 fused_ordering(401) 00:11:28.823 fused_ordering(402) 00:11:28.823 fused_ordering(403) 00:11:28.823 fused_ordering(404) 00:11:28.823 fused_ordering(405) 00:11:28.823 fused_ordering(406) 00:11:28.823 fused_ordering(407) 00:11:28.823 fused_ordering(408) 00:11:28.823 fused_ordering(409) 00:11:28.823 fused_ordering(410) 00:11:29.082 fused_ordering(411) 00:11:29.082 fused_ordering(412) 00:11:29.082 fused_ordering(413) 00:11:29.082 fused_ordering(414) 00:11:29.082 fused_ordering(415) 00:11:29.082 fused_ordering(416) 00:11:29.082 fused_ordering(417) 00:11:29.082 fused_ordering(418) 00:11:29.082 
fused_ordering(419) 00:11:29.082 fused_ordering(420) 00:11:29.082 fused_ordering(421) 00:11:29.082 fused_ordering(422) 00:11:29.082 fused_ordering(423) 00:11:29.082 fused_ordering(424) 00:11:29.082 fused_ordering(425) 00:11:29.082 fused_ordering(426) 00:11:29.082 fused_ordering(427) 00:11:29.082 fused_ordering(428) 00:11:29.082 fused_ordering(429) 00:11:29.082 fused_ordering(430) 00:11:29.082 fused_ordering(431) 00:11:29.082 fused_ordering(432) 00:11:29.082 fused_ordering(433) 00:11:29.082 fused_ordering(434) 00:11:29.082 fused_ordering(435) 00:11:29.082 fused_ordering(436) 00:11:29.082 fused_ordering(437) 00:11:29.082 fused_ordering(438) 00:11:29.082 fused_ordering(439) 00:11:29.082 fused_ordering(440) 00:11:29.082 fused_ordering(441) 00:11:29.082 fused_ordering(442) 00:11:29.082 fused_ordering(443) 00:11:29.082 fused_ordering(444) 00:11:29.082 fused_ordering(445) 00:11:29.082 fused_ordering(446) 00:11:29.082 fused_ordering(447) 00:11:29.082 fused_ordering(448) 00:11:29.082 fused_ordering(449) 00:11:29.082 fused_ordering(450) 00:11:29.082 fused_ordering(451) 00:11:29.082 fused_ordering(452) 00:11:29.083 fused_ordering(453) 00:11:29.083 fused_ordering(454) 00:11:29.083 fused_ordering(455) 00:11:29.083 fused_ordering(456) 00:11:29.083 fused_ordering(457) 00:11:29.083 fused_ordering(458) 00:11:29.083 fused_ordering(459) 00:11:29.083 fused_ordering(460) 00:11:29.083 fused_ordering(461) 00:11:29.083 fused_ordering(462) 00:11:29.083 fused_ordering(463) 00:11:29.083 fused_ordering(464) 00:11:29.083 fused_ordering(465) 00:11:29.083 fused_ordering(466) 00:11:29.083 fused_ordering(467) 00:11:29.083 fused_ordering(468) 00:11:29.083 fused_ordering(469) 00:11:29.083 fused_ordering(470) 00:11:29.083 fused_ordering(471) 00:11:29.083 fused_ordering(472) 00:11:29.083 fused_ordering(473) 00:11:29.083 fused_ordering(474) 00:11:29.083 fused_ordering(475) 00:11:29.083 fused_ordering(476) 00:11:29.083 fused_ordering(477) 00:11:29.083 fused_ordering(478) 00:11:29.083 fused_ordering(479) 00:11:29.083 fused_ordering(480) 00:11:29.083 fused_ordering(481) 00:11:29.083 fused_ordering(482) 00:11:29.083 fused_ordering(483) 00:11:29.083 fused_ordering(484) 00:11:29.083 fused_ordering(485) 00:11:29.083 fused_ordering(486) 00:11:29.083 fused_ordering(487) 00:11:29.083 fused_ordering(488) 00:11:29.083 fused_ordering(489) 00:11:29.083 fused_ordering(490) 00:11:29.083 fused_ordering(491) 00:11:29.083 fused_ordering(492) 00:11:29.083 fused_ordering(493) 00:11:29.083 fused_ordering(494) 00:11:29.083 fused_ordering(495) 00:11:29.083 fused_ordering(496) 00:11:29.083 fused_ordering(497) 00:11:29.083 fused_ordering(498) 00:11:29.083 fused_ordering(499) 00:11:29.083 fused_ordering(500) 00:11:29.083 fused_ordering(501) 00:11:29.083 fused_ordering(502) 00:11:29.083 fused_ordering(503) 00:11:29.083 fused_ordering(504) 00:11:29.083 fused_ordering(505) 00:11:29.083 fused_ordering(506) 00:11:29.083 fused_ordering(507) 00:11:29.083 fused_ordering(508) 00:11:29.083 fused_ordering(509) 00:11:29.083 fused_ordering(510) 00:11:29.083 fused_ordering(511) 00:11:29.083 fused_ordering(512) 00:11:29.083 fused_ordering(513) 00:11:29.083 fused_ordering(514) 00:11:29.083 fused_ordering(515) 00:11:29.083 fused_ordering(516) 00:11:29.083 fused_ordering(517) 00:11:29.083 fused_ordering(518) 00:11:29.083 fused_ordering(519) 00:11:29.083 fused_ordering(520) 00:11:29.083 fused_ordering(521) 00:11:29.083 fused_ordering(522) 00:11:29.083 fused_ordering(523) 00:11:29.083 fused_ordering(524) 00:11:29.083 fused_ordering(525) 00:11:29.083 fused_ordering(526) 
00:11:29.083 fused_ordering(527) 00:11:29.083 fused_ordering(528) 00:11:29.083 fused_ordering(529) 00:11:29.083 fused_ordering(530) 00:11:29.083 fused_ordering(531) 00:11:29.083 fused_ordering(532) 00:11:29.083 fused_ordering(533) 00:11:29.083 fused_ordering(534) 00:11:29.083 fused_ordering(535) 00:11:29.083 fused_ordering(536) 00:11:29.083 fused_ordering(537) 00:11:29.083 fused_ordering(538) 00:11:29.083 fused_ordering(539) 00:11:29.083 fused_ordering(540) 00:11:29.083 fused_ordering(541) 00:11:29.083 fused_ordering(542) 00:11:29.083 fused_ordering(543) 00:11:29.083 fused_ordering(544) 00:11:29.083 fused_ordering(545) 00:11:29.083 fused_ordering(546) 00:11:29.083 fused_ordering(547) 00:11:29.083 fused_ordering(548) 00:11:29.083 fused_ordering(549) 00:11:29.083 fused_ordering(550) 00:11:29.083 fused_ordering(551) 00:11:29.083 fused_ordering(552) 00:11:29.083 fused_ordering(553) 00:11:29.083 fused_ordering(554) 00:11:29.083 fused_ordering(555) 00:11:29.083 fused_ordering(556) 00:11:29.083 fused_ordering(557) 00:11:29.083 fused_ordering(558) 00:11:29.083 fused_ordering(559) 00:11:29.083 fused_ordering(560) 00:11:29.083 fused_ordering(561) 00:11:29.083 fused_ordering(562) 00:11:29.083 fused_ordering(563) 00:11:29.083 fused_ordering(564) 00:11:29.083 fused_ordering(565) 00:11:29.083 fused_ordering(566) 00:11:29.083 fused_ordering(567) 00:11:29.083 fused_ordering(568) 00:11:29.083 fused_ordering(569) 00:11:29.083 fused_ordering(570) 00:11:29.083 fused_ordering(571) 00:11:29.083 fused_ordering(572) 00:11:29.083 fused_ordering(573) 00:11:29.083 fused_ordering(574) 00:11:29.083 fused_ordering(575) 00:11:29.083 fused_ordering(576) 00:11:29.083 fused_ordering(577) 00:11:29.083 fused_ordering(578) 00:11:29.083 fused_ordering(579) 00:11:29.083 fused_ordering(580) 00:11:29.083 fused_ordering(581) 00:11:29.083 fused_ordering(582) 00:11:29.083 fused_ordering(583) 00:11:29.083 fused_ordering(584) 00:11:29.083 fused_ordering(585) 00:11:29.083 fused_ordering(586) 00:11:29.083 fused_ordering(587) 00:11:29.083 fused_ordering(588) 00:11:29.083 fused_ordering(589) 00:11:29.083 fused_ordering(590) 00:11:29.083 fused_ordering(591) 00:11:29.083 fused_ordering(592) 00:11:29.083 fused_ordering(593) 00:11:29.083 fused_ordering(594) 00:11:29.083 fused_ordering(595) 00:11:29.083 fused_ordering(596) 00:11:29.083 fused_ordering(597) 00:11:29.083 fused_ordering(598) 00:11:29.083 fused_ordering(599) 00:11:29.083 fused_ordering(600) 00:11:29.083 fused_ordering(601) 00:11:29.083 fused_ordering(602) 00:11:29.083 fused_ordering(603) 00:11:29.083 fused_ordering(604) 00:11:29.083 fused_ordering(605) 00:11:29.083 fused_ordering(606) 00:11:29.083 fused_ordering(607) 00:11:29.083 fused_ordering(608) 00:11:29.083 fused_ordering(609) 00:11:29.083 fused_ordering(610) 00:11:29.083 fused_ordering(611) 00:11:29.083 fused_ordering(612) 00:11:29.083 fused_ordering(613) 00:11:29.083 fused_ordering(614) 00:11:29.083 fused_ordering(615) 00:11:29.083 fused_ordering(616) 00:11:29.083 fused_ordering(617) 00:11:29.083 fused_ordering(618) 00:11:29.083 fused_ordering(619) 00:11:29.083 fused_ordering(620) 00:11:29.083 fused_ordering(621) 00:11:29.083 fused_ordering(622) 00:11:29.083 fused_ordering(623) 00:11:29.083 fused_ordering(624) 00:11:29.083 fused_ordering(625) 00:11:29.083 fused_ordering(626) 00:11:29.083 fused_ordering(627) 00:11:29.083 fused_ordering(628) 00:11:29.083 fused_ordering(629) 00:11:29.083 fused_ordering(630) 00:11:29.083 fused_ordering(631) 00:11:29.083 fused_ordering(632) 00:11:29.083 fused_ordering(633) 00:11:29.083 
fused_ordering(634) 00:11:29.083 [ ... fused_ordering(635) through fused_ordering(955) condensed: the test emits one fused_ordering(N) line per iteration, all timestamped between 00:11:29.083 and 00:11:29.344 and none reporting an error; the run continues below uninterrupted through fused_ordering(1023) ... ] fused_ordering(956)
00:11:29.344 fused_ordering(957) 00:11:29.344 fused_ordering(958) 00:11:29.344 fused_ordering(959) 00:11:29.344 fused_ordering(960) 00:11:29.344 fused_ordering(961) 00:11:29.344 fused_ordering(962) 00:11:29.344 fused_ordering(963) 00:11:29.344 fused_ordering(964) 00:11:29.344 fused_ordering(965) 00:11:29.344 fused_ordering(966) 00:11:29.344 fused_ordering(967) 00:11:29.344 fused_ordering(968) 00:11:29.344 fused_ordering(969) 00:11:29.344 fused_ordering(970) 00:11:29.344 fused_ordering(971) 00:11:29.344 fused_ordering(972) 00:11:29.344 fused_ordering(973) 00:11:29.344 fused_ordering(974) 00:11:29.344 fused_ordering(975) 00:11:29.344 fused_ordering(976) 00:11:29.344 fused_ordering(977) 00:11:29.344 fused_ordering(978) 00:11:29.344 fused_ordering(979) 00:11:29.344 fused_ordering(980) 00:11:29.344 fused_ordering(981) 00:11:29.344 fused_ordering(982) 00:11:29.344 fused_ordering(983) 00:11:29.344 fused_ordering(984) 00:11:29.344 fused_ordering(985) 00:11:29.344 fused_ordering(986) 00:11:29.344 fused_ordering(987) 00:11:29.344 fused_ordering(988) 00:11:29.344 fused_ordering(989) 00:11:29.344 fused_ordering(990) 00:11:29.344 fused_ordering(991) 00:11:29.344 fused_ordering(992) 00:11:29.344 fused_ordering(993) 00:11:29.344 fused_ordering(994) 00:11:29.344 fused_ordering(995) 00:11:29.344 fused_ordering(996) 00:11:29.344 fused_ordering(997) 00:11:29.344 fused_ordering(998) 00:11:29.344 fused_ordering(999) 00:11:29.344 fused_ordering(1000) 00:11:29.344 fused_ordering(1001) 00:11:29.344 fused_ordering(1002) 00:11:29.344 fused_ordering(1003) 00:11:29.344 fused_ordering(1004) 00:11:29.344 fused_ordering(1005) 00:11:29.344 fused_ordering(1006) 00:11:29.344 fused_ordering(1007) 00:11:29.344 fused_ordering(1008) 00:11:29.344 fused_ordering(1009) 00:11:29.345 fused_ordering(1010) 00:11:29.345 fused_ordering(1011) 00:11:29.345 fused_ordering(1012) 00:11:29.345 fused_ordering(1013) 00:11:29.345 fused_ordering(1014) 00:11:29.345 fused_ordering(1015) 00:11:29.345 fused_ordering(1016) 00:11:29.345 fused_ordering(1017) 00:11:29.345 fused_ordering(1018) 00:11:29.345 fused_ordering(1019) 00:11:29.345 fused_ordering(1020) 00:11:29.345 fused_ordering(1021) 00:11:29.345 fused_ordering(1022) 00:11:29.345 fused_ordering(1023) 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:29.345 rmmod nvme_rdma 00:11:29.345 rmmod nvme_fabrics 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2423768 ']' 
00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2423768 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2423768 ']' 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2423768 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2423768 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2423768' 00:11:29.345 killing process with pid 2423768 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2423768 00:11:29.345 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2423768 00:11:29.604 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.604 13:41:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:29.604 00:11:29.604 real 0m9.119s 00:11:29.604 user 0m4.824s 00:11:29.604 sys 0m5.679s 00:11:29.604 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.604 13:41:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:29.604 ************************************ 00:11:29.604 END TEST nvmf_fused_ordering 00:11:29.604 ************************************ 00:11:29.604 13:41:56 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:29.604 13:41:56 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:29.604 13:41:56 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.604 13:41:56 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.604 13:41:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:29.604 ************************************ 00:11:29.604 START TEST nvmf_delete_subsystem 00:11:29.604 ************************************ 00:11:29.604 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:29.864 * Looking for test storage... 
00:11:29.864 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.864 13:41:56 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:36.425 13:42:02 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:36.425 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:36.425 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:36.425 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:36.426 Found net devices under 0000:18:00.0: mlx_0_0 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.426 13:42:02 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:36.426 Found net devices under 0000:18:00.1: mlx_0_1 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.426 13:42:02 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:36.426 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:36.426 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:36.426 altname enp24s0f0np0 00:11:36.426 altname ens785f0np0 00:11:36.426 inet 192.168.100.8/24 scope global mlx_0_0 00:11:36.426 valid_lft forever preferred_lft forever 00:11:36.426 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:36.685 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:36.685 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:36.685 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:36.685 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:36.685 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:36.685 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:36.685 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:36.685 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:36.685 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:36.685 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:36.685 altname enp24s0f1np1 00:11:36.685 altname ens785f1np1 00:11:36.685 inet 192.168.100.9/24 scope global mlx_0_1 00:11:36.685 valid_lft forever preferred_lft forever 00:11:36.685 13:42:02 
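For reference, the address-harvesting step traced above reduces to a single shell pipeline per interface; the following is a minimal standalone sketch of what get_ip_address does for the two mlx5 ports reported here (the for-loop wrapper is illustrative, not part of nvmf/common.sh):

    # Print the primary IPv4 address of each RDMA-capable netdev, as the harness does
    for ifname in mlx_0_0 mlx_0_1; do
        # `ip -o -4 addr show` emits one record per address; field 4 is "ADDR/PREFIX"
        addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
        echo "$ifname -> ${addr:-<no IPv4 address assigned>}"
    done

On this node the sketch would report 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1, which is exactly where NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP below come from.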
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:36.685 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:36.686 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:36.686 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:36.686 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:36.686 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:36.686 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:36.686 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:36.686 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:36.686 13:42:02 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:36.686 192.168.100.9' 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:36.686 192.168.100.9' 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:36.686 192.168.100.9' 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2427006 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2427006 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2427006 ']' 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:36.686 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.686 [2024-07-15 13:42:03.137489] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:11:36.686 [2024-07-15 13:42:03.137547] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.686 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.944 [2024-07-15 13:42:03.224486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:36.944 [2024-07-15 13:42:03.311632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.944 [2024-07-15 13:42:03.311668] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.944 [2024-07-15 13:42:03.311678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.944 [2024-07-15 13:42:03.311687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.944 [2024-07-15 13:42:03.311695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.944 [2024-07-15 13:42:03.311760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.944 [2024-07-15 13:42:03.311761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.512 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:37.512 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:37.512 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:37.512 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:37.512 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.512 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.512 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:37.512 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.512 13:42:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.512 [2024-07-15 13:42:04.021581] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13e1a30/0x13e5f20) succeed. 00:11:37.512 [2024-07-15 13:42:04.030745] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13e2f30/0x14275b0) succeed. 
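The nvmf_create_transport call above is what turns the two discovered mlx5 ports into usable listeners: the pair of create_ib_device notices confirms the RDMA transport opened mlx5_0 and mlx5_1. Issued by hand, the same RPC would look like the sketch below (assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock RPC socket instead of the harness's rpc_cmd wrapper):

    # Create the RDMA transport with the options the test traced:
    #   -t rdma                 transport type
    #   --num-shared-buffers    size of the shared receive buffer pool
    #   -u 8192                 I/O unit size in bytes
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma --num-shared-buffers 1024 -u 8192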
00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.771 [2024-07-15 13:42:04.121234] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.771 NULL1 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.771 Delay0 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2427285 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:37.771 13:42:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:37.771 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.771 [2024-07-15 13:42:04.247686] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
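Taken together, the RPCs traced above build the whole target-side fixture before anything is deleted: a subsystem capped at 10 namespaces, an RDMA listener on 192.168.100.8:4420, and a null bdev wrapped in a delay bdev so that I/O stays in flight long enough for the delete to race against it. A hedged summary of that sequence, plus the perf job the harness backgrounds before the two-second sleep (again assuming scripts/rpc.py rather than the rpc_cmd wrapper):

    # Subsystem with a deliberately slow namespace behind it
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB backing size, 512 B blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # latencies in microseconds, ~1 s per I/O
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Load generator started in the background against that namespace
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4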
00:11:39.676 13:42:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.676 13:42:06 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.676 13:42:06 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.053 NVMe io qpair process completion error 00:11:41.053 NVMe io qpair process completion error 00:11:41.053 NVMe io qpair process completion error 00:11:41.053 NVMe io qpair process completion error 00:11:41.053 NVMe io qpair process completion error 00:11:41.053 NVMe io qpair process completion error 00:11:41.053 13:42:07 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.053 13:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:41.053 13:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2427285 00:11:41.053 13:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:41.313 13:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:41.313 13:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2427285 00:11:41.313 13:42:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Write completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Write completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Write completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Write completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Write completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Write completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.883 starting I/O failed: -6 00:11:41.883 Read completed with error (sct=0, sc=8) 00:11:41.884 starting I/O failed: -6 00:11:41.884 Read completed with error (sct=0, sc=8) 00:11:41.884 starting I/O failed: -6 00:11:41.884 Write completed with error (sct=0, sc=8) 00:11:41.884 starting 
I/O failed: -6 00:11:41.884 [ ... several hundred further completions condensed: as nvmf_delete_subsystem tears the subsystem down under the running perf job, every outstanding request is returned as 'Read completed with error (sct=0, sc=8)' or 'Write completed with error (sct=0, sc=8)', interleaved with 'starting I/O failed: -6' for submissions that can no longer be queued; all of them fall between 00:11:41.883 and 00:11:41.885, and the same pattern continues briefly below before perf prints its summary ... ]
00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Write completed with error (sct=0, sc=8) 00:11:41.885 Write completed with error (sct=0, sc=8) 00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Write completed with error (sct=0, sc=8) 00:11:41.885 Write completed with error (sct=0, sc=8) 00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Read completed with error (sct=0, sc=8) 00:11:41.885 Write completed with error (sct=0, sc=8) 00:11:41.885 Initializing NVMe Controllers 00:11:41.885 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:41.885 Controller IO queue size 128, less than required. 00:11:41.885 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:41.885 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:41.885 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:41.885 Initialization complete. Launching workers. 00:11:41.885 ======================================================== 00:11:41.885 Latency(us) 00:11:41.885 Device Information : IOPS MiB/s Average min max 00:11:41.885 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.53 0.04 1593104.29 1000084.01 2973492.16 00:11:41.885 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.53 0.04 1594409.50 1001178.44 2974543.00 00:11:41.885 ======================================================== 00:11:41.885 Total : 161.06 0.08 1593756.90 1000084.01 2974543.00 00:11:41.885 00:11:41.885 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:41.885 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2427285 00:11:41.885 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:41.885 [2024-07-15 13:42:08.349405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:11:41.885 [2024-07-15 13:42:08.349447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
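The trace above and below shows how delete_subsystem.sh notices that the I/O generator has gone away: it repeatedly probes the spdk_nvme_perf PID with kill -0, sleeping 0.5 s between probes and bailing out once the delay counter passes 30. A minimal bash sketch of that wait loop, reconstructed from the xtrace output rather than copied from the script (the PID value and the timeout message are illustrative):

    perf_pid=2427285            # the spdk_nvme_perf process launched earlier in this log
    delay=0
    while kill -0 "$perf_pid"; do               # still running?
        if (( delay++ > 30 )); then             # roughly 15 s at 0.5 s per probe
            echo "perf did not exit after subsystem delete" >&2
            break
        fi
        sleep 0.5
    done
    # Once the PID is gone, kill -0 itself reports "No such process" (as seen in the
    # trace), and the script then double-checks with its NOT wait <pid> helper.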
00:11:41.885 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2427285 00:11:42.453 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2427285) - No such process 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2427285 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2427285 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2427285 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.453 [2024-07-15 13:42:08.868167] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2428155 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:42.453 13:42:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:42.453 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.712 [2024-07-15 13:42:08.979186] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:42.971 13:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:42.971 13:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:42.971 13:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:43.539 13:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:43.539 13:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:43.539 13:42:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:44.105 13:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:44.105 13:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:44.105 13:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:44.673 13:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:44.673 13:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:44.673 13:42:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:44.974 13:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:44.974 13:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:44.974 13:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:45.544 13:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:45.544 13:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:45.544 13:42:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:46.112 13:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:46.112 13:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:46.112 13:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:46.408 13:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:46.408 13:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:46.408 13:42:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:46.976 13:42:13 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:46.976 13:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:46.976 13:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.543 13:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.543 13:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:47.543 13:42:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.111 13:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.111 13:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:48.111 13:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.679 13:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.679 13:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:48.679 13:42:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.938 13:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.938 13:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:48.938 13:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.505 13:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.505 13:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:49.505 13:42:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.763 Initializing NVMe Controllers 00:11:49.763 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:49.763 Controller IO queue size 128, less than required. 00:11:49.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:49.764 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:49.764 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:49.764 Initialization complete. Launching workers. 
00:11:49.764 ======================================================== 00:11:49.764 Latency(us) 00:11:49.764 Device Information : IOPS MiB/s Average min max 00:11:49.764 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001612.06 1000056.62 1004788.99 00:11:49.764 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002897.71 1000070.72 1006838.76 00:11:49.764 ======================================================== 00:11:49.764 Total : 256.00 0.12 1002254.88 1000056.62 1006838.76 00:11:49.764 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2428155 00:11:50.022 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2428155) - No such process 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2428155 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:50.022 rmmod nvme_rdma 00:11:50.022 rmmod nvme_fabrics 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2427006 ']' 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2427006 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2427006 ']' 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2427006 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:50.022 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2427006 00:11:50.281 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:50.281 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:50.281 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2427006' 00:11:50.281 killing process with pid 2427006 00:11:50.281 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 
2427006 00:11:50.281 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2427006 00:11:50.541 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:50.541 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:50.541 00:11:50.541 real 0m20.752s 00:11:50.541 user 0m50.214s 00:11:50.541 sys 0m6.538s 00:11:50.541 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.541 13:42:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.541 ************************************ 00:11:50.541 END TEST nvmf_delete_subsystem 00:11:50.541 ************************************ 00:11:50.541 13:42:16 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:50.541 13:42:16 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:11:50.541 13:42:16 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.541 13:42:16 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.541 13:42:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:50.541 ************************************ 00:11:50.541 START TEST nvmf_ns_masking 00:11:50.541 ************************************ 00:11:50.541 13:42:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:11:50.541 * Looking for test storage... 00:11:50.541 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
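The nvmftestfini teardown traced just above unloads the host-side NVMe/RDMA modules with a tolerant retry loop: strict error handling is suspended with set +e, modprobe -r is attempted up to 20 times, then set -e is restored. A rough standalone sketch of that pattern, assuming a success-based exit condition that the trace itself does not show:

    set +e                      # module unload may fail while references remain
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break   # assumed exit condition
    done
    set -e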
00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:50.541 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=bf8bf465-b512-4286-8cc1-cd710c147736 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=11a6ad58-feea-4140-b9c0-29be2e424854 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6d109b17-0636-41ed-b05c-5c2c433e053c 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.800 13:42:17 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:57.369 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:57.369 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:57.369 Found net devices under 0000:18:00.0: mlx_0_0 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:57.369 Found net devices under 0000:18:00.1: mlx_0_1 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:57.369 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- 
# '[' Linux '!=' Linux ']' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:57.370 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:57.370 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:11:57.370 altname enp24s0f0np0 00:11:57.370 altname ens785f0np0 00:11:57.370 inet 192.168.100.8/24 scope global mlx_0_0 00:11:57.370 valid_lft forever preferred_lft forever 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:57.370 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:57.370 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:11:57.370 altname enp24s0f1np1 00:11:57.370 altname ens785f1np1 00:11:57.370 inet 192.168.100.9/24 scope global mlx_0_1 00:11:57.370 valid_lft forever preferred_lft forever 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:57.370 192.168.100.9' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:57.370 192.168.100.9' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:57.370 192.168.100.9' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:57.370 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2432141 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:57.629 13:42:23 
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2432141 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2432141 ']' 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.629 13:42:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:57.629 [2024-07-15 13:42:23.953947] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:11:57.629 [2024-07-15 13:42:23.954004] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.629 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.629 [2024-07-15 13:42:24.037389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.629 [2024-07-15 13:42:24.125847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.629 [2024-07-15 13:42:24.125893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.629 [2024-07-15 13:42:24.125902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.629 [2024-07-15 13:42:24.125910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.629 [2024-07-15 13:42:24.125916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.629 [2024-07-15 13:42:24.125946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.565 13:42:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.565 13:42:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:58.565 13:42:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.565 13:42:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:58.565 13:42:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:58.565 13:42:24 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.565 13:42:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:58.565 [2024-07-15 13:42:24.998435] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11e4f60/0x11e9450) succeed. 00:11:58.565 [2024-07-15 13:42:25.007639] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11e6460/0x122aae0) succeed. 
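The rest of this trace repeatedly checks whether a namespace is visible to the connected host. Stripped of the test harness, the check exercised below amounts to reading the namespace's NGUID through nvme-cli and treating an all-zero value as "not visible"; the sketch is an illustrative reduction, not the script's helper verbatim, and the device path and namespace ID are placeholders:

    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid"      # prints e.g. "[ 0]:0x1" when the namespace is attached
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # in this trace a masked namespace identifies with an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1 && echo "nsid 0x1 is visible to this host"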
00:11:58.565 13:42:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:58.566 13:42:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:58.566 13:42:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:58.824 Malloc1 00:11:58.824 13:42:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:59.083 Malloc2 00:11:59.083 13:42:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:59.341 13:42:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:59.341 13:42:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:59.600 [2024-07-15 13:42:26.009929] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:59.601 13:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:59.601 13:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6d109b17-0636-41ed-b05c-5c2c433e053c -a 192.168.100.8 -s 4420 -i 4 00:11:59.860 13:42:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.860 13:42:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:59.860 13:42:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.860 13:42:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:59.860 13:42:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:02.394 [ 0]:0x1 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f8bd5b5bbc4f4876bf295ff0dd9a075e 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f8bd5b5bbc4f4876bf295ff0dd9a075e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:02.394 [ 0]:0x1 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f8bd5b5bbc4f4876bf295ff0dd9a075e 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f8bd5b5bbc4f4876bf295ff0dd9a075e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:02.394 [ 1]:0x2 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62695550236f4aa9a94e7786d2a40d69 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62695550236f4aa9a94e7786d2a40d69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:02.394 13:42:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.653 13:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.912 13:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:03.172 13:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:03.172 13:42:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6d109b17-0636-41ed-b05c-5c2c433e053c -a 192.168.100.8 -s 4420 -i 4 00:12:03.431 13:42:29 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:03.431 13:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:03.431 13:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.431 13:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:03.431 13:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:03.431 13:42:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:05.337 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:05.337 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:05.337 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.337 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.338 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:05.597 13:42:31 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.597 [ 0]:0x2 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62695550236f4aa9a94e7786d2a40d69 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62695550236f4aa9a94e7786d2a40d69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.597 13:42:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:05.856 [ 0]:0x1 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f8bd5b5bbc4f4876bf295ff0dd9a075e 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f8bd5b5bbc4f4876bf295ff0dd9a075e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:05.856 [ 1]:0x2 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62695550236f4aa9a94e7786d2a40d69 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62695550236f4aa9a94e7786d2a40d69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.856 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:06.115 13:42:32 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:06.115 [ 0]:0x2 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62695550236f4aa9a94e7786d2a40d69 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62695550236f4aa9a94e7786d2a40d69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:06.115 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.374 13:42:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:06.634 13:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:06.634 13:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6d109b17-0636-41ed-b05c-5c2c433e053c -a 192.168.100.8 -s 4420 -i 4 00:12:06.892 13:42:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:06.892 13:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:06.892 13:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.892 13:42:33 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:06.892 13:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:06.892 13:42:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.422 [ 0]:0x1 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f8bd5b5bbc4f4876bf295ff0dd9a075e 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f8bd5b5bbc4f4876bf295ff0dd9a075e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:09.422 [ 1]:0x2 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62695550236f4aa9a94e7786d2a40d69 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62695550236f4aa9a94e7786d2a40d69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:09.422 [ 0]:0x2 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62695550236f4aa9a94e7786d2a40d69 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62695550236f4aa9a94e7786d2a40d69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:09.422 13:42:35 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:09.422 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:09.681 [2024-07-15 13:42:35.961491] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:09.681 request: 00:12:09.681 { 00:12:09.681 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.681 "nsid": 2, 00:12:09.681 "host": "nqn.2016-06.io.spdk:host1", 00:12:09.681 "method": "nvmf_ns_remove_host", 00:12:09.681 "req_id": 1 00:12:09.681 } 00:12:09.681 Got JSON-RPC error response 00:12:09.681 response: 00:12:09.681 { 00:12:09.681 "code": -32602, 00:12:09.681 "message": "Invalid parameters" 00:12:09.681 } 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.681 13:42:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:09.681 13:42:36 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:09.681 [ 0]:0x2 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62695550236f4aa9a94e7786d2a40d69 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62695550236f4aa9a94e7786d2a40d69 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:09.681 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.940 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2433947 00:12:09.940 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:09.940 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.940 13:42:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2433947 /var/tmp/host.sock 00:12:09.940 13:42:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2433947 ']' 00:12:09.940 13:42:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:09.940 13:42:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.940 13:42:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:09.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:09.940 13:42:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.940 13:42:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:09.940 [2024-07-15 13:42:36.455638] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
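The xtrace above keeps exercising the same visibility probe from target/ns_masking.sh before and after each nvmf_ns_add_host/nvmf_ns_remove_host call. A minimal reconstruction of that check, assuming the connected controller enumerates as /dev/nvme0 as it does in this run (an approximation of the traced commands, not the verbatim helper):

ns_is_visible() {
    local nsid=$1 nguid
    # A visible namespace shows up in the controller's namespace list; the grep
    # just echoes the matching "[ 0]:0x1"-style entry seen in the log output.
    nvme list-ns /dev/nvme0 | grep "$nsid" || true
    # A namespace masked away from this host reads back with an all-zero NGUID.
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

With namespace 1 exported to nqn.2016-06.io.spdk:host1 the probe returns the real NGUID (f8bd5b5b...), and after nvmf_ns_remove_host it reads back as all zeros, which is exactly what the NOT wrapper in the trace asserts.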
00:12:09.940 [2024-07-15 13:42:36.455703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433947 ] 00:12:10.199 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.199 [2024-07-15 13:42:36.535292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.199 [2024-07-15 13:42:36.621362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.765 13:42:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.765 13:42:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:10.765 13:42:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.024 13:42:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:11.282 13:42:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid bf8bf465-b512-4286-8cc1-cd710c147736 00:12:11.282 13:42:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:11.282 13:42:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BF8BF465B51242868CC1CD710C147736 -i 00:12:11.542 13:42:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 11a6ad58-feea-4140-b9c0-29be2e424854 00:12:11.542 13:42:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:11.542 13:42:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 11A6AD58FEEA4140B9C029BE2E424854 -i 00:12:11.542 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:11.870 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:12.129 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:12.129 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:12.129 nvme0n1 00:12:12.388 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:12.388 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host2 -b nvme1 00:12:12.388 nvme1n2 00:12:12.647 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:12.647 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:12.647 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:12.647 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:12.647 13:42:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:12.647 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:12.647 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:12.647 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:12.647 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:12.905 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ bf8bf465-b512-4286-8cc1-cd710c147736 == \b\f\8\b\f\4\6\5\-\b\5\1\2\-\4\2\8\6\-\8\c\c\1\-\c\d\7\1\0\c\1\4\7\7\3\6 ]] 00:12:12.905 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:12.905 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:12.905 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 11a6ad58-feea-4140-b9c0-29be2e424854 == \1\1\a\6\a\d\5\8\-\f\e\e\a\-\4\1\4\0\-\b\9\c\0\-\2\9\b\e\2\e\4\2\4\8\5\4 ]] 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2433947 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2433947 ']' 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2433947 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2433947 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2433947' 00:12:13.164 killing process with pid 2433947 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2433947 00:12:13.164 13:42:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2433947 00:12:13.421 13:42:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@142 -- # 
nvmftestfini 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:13.679 rmmod nvme_rdma 00:12:13.679 rmmod nvme_fabrics 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:13.679 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2432141 ']' 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2432141 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2432141 ']' 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2432141 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2432141 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2432141' 00:12:13.680 killing process with pid 2432141 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2432141 00:12:13.680 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2432141 00:12:14.248 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:14.248 13:42:40 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:14.248 00:12:14.248 real 0m23.558s 00:12:14.248 user 0m26.544s 00:12:14.248 sys 0m7.666s 00:12:14.248 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.248 13:42:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:14.248 ************************************ 00:12:14.248 END TEST nvmf_ns_masking 00:12:14.248 ************************************ 00:12:14.248 13:42:40 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:14.248 13:42:40 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:14.248 13:42:40 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:14.248 13:42:40 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:14.248 13:42:40 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.248 13:42:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:14.248 ************************************ 00:12:14.248 START TEST nvmf_nvme_cli 
00:12:14.248 ************************************ 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:14.248 * Looking for test storage... 00:12:14.248 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.248 13:42:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:20.831 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:20.831 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:20.831 Found net devices under 0000:18:00.0: mlx_0_0 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.831 13:42:47 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:20.831 Found net devices under 0000:18:00.1: mlx_0_1 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
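The interface walk being traced here pairs each mlx_0_* netdev with its IPv4 address; the get_ip_address lookup shown in the trace reduces to a one-line ip/awk/cut pipeline. A rough sketch of what nvmf/common.sh is doing at this point (reconstructed from the trace, not the verbatim source):

get_ip_address() {
    local interface=$1
    # First IPv4 address on the interface, with the /prefix length stripped.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # 192.168.100.8 on this test bed
get_ip_address mlx_0_1   # 192.168.100.9

On this rig the two ConnectX ports already carry 192.168.100.8 and 192.168.100.9, which become the first and second target IPs a few lines further down.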
00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:20.831 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:21.092 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:21.092 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:21.092 altname enp24s0f0np0 00:12:21.092 altname ens785f0np0 00:12:21.092 inet 192.168.100.8/24 scope global mlx_0_0 00:12:21.092 valid_lft forever preferred_lft forever 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:21.092 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:21.092 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:21.092 altname enp24s0f1np1 00:12:21.092 altname ens785f1np1 00:12:21.092 inet 192.168.100.9/24 scope global mlx_0_1 00:12:21.092 valid_lft forever preferred_lft forever 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:21.092 192.168.100.9' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:21.092 192.168.100.9' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:21.092 192.168.100.9' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2437423 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2437423 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2437423 ']' 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.092 13:42:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.092 [2024-07-15 13:42:47.560630] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:21.092 [2024-07-15 13:42:47.560686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.092 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.351 [2024-07-15 13:42:47.647254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.351 [2024-07-15 13:42:47.740030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.351 [2024-07-15 13:42:47.740080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.351 [2024-07-15 13:42:47.740089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.351 [2024-07-15 13:42:47.740114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.351 [2024-07-15 13:42:47.740121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
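The trace above loads the host's nvme-rdma module and launches the SPDK target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), after which the waitforlisten helper polls until the app is serving RPCs on /var/tmp/spdk.sock. A minimal standalone sketch of that startup step, assuming the workspace layout shown in these paths and substituting a retry loop around framework_wait_init (the same RPC the harness later issues against the bdevperf socket) for the waitforlisten helper:

    sudo modprobe nvme-rdma
    sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # poll the default RPC socket until the target finishes framework init
    until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
        sleep 1
    done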
00:12:21.351 [2024-07-15 13:42:47.740238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.351 [2024-07-15 13:42:47.740345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.351 [2024-07-15 13:42:47.740445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.351 [2024-07-15 13:42:47.740446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.918 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.918 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:21.918 13:42:48 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.918 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:21.918 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.918 13:42:48 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.918 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:21.918 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.918 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.176 [2024-07-15 13:42:48.451477] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x236e180/0x2372670) succeed. 00:12:22.176 [2024-07-15 13:42:48.460942] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x236f7c0/0x23b3d00) succeed. 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.176 Malloc0 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.176 Malloc1 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.176 13:42:48 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.176 [2024-07-15 13:42:48.657868] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.176 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:12:22.435 00:12:22.435 Discovery Log Number of Records 2, Generation counter 2 00:12:22.435 =====Discovery Log Entry 0====== 00:12:22.435 trtype: rdma 00:12:22.435 adrfam: ipv4 00:12:22.435 subtype: current discovery subsystem 00:12:22.435 treq: not required 00:12:22.435 portid: 0 00:12:22.435 trsvcid: 4420 00:12:22.435 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:22.435 traddr: 192.168.100.8 00:12:22.435 eflags: explicit discovery connections, duplicate discovery information 00:12:22.435 rdma_prtype: not specified 00:12:22.436 rdma_qptype: connected 00:12:22.436 rdma_cms: rdma-cm 00:12:22.436 rdma_pkey: 0x0000 00:12:22.436 =====Discovery Log Entry 1====== 00:12:22.436 trtype: rdma 00:12:22.436 adrfam: ipv4 00:12:22.436 subtype: nvme subsystem 00:12:22.436 treq: not required 00:12:22.436 portid: 0 00:12:22.436 trsvcid: 4420 00:12:22.436 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:22.436 traddr: 192.168.100.8 00:12:22.436 eflags: none 00:12:22.436 rdma_prtype: not specified 00:12:22.436 rdma_qptype: connected 00:12:22.436 rdma_cms: rdma-cm 00:12:22.436 rdma_pkey: 0x0000 00:12:22.436 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:22.436 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:22.436 13:42:48 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:22.436 13:42:48 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.436 13:42:48 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:22.436 13:42:48 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:22.436 13:42:48 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.436 13:42:48 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:22.436 13:42:48 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:22.436 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:22.436 13:42:48 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:23.372 13:42:49 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:23.372 13:42:49 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:23.372 13:42:49 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.372 13:42:49 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:23.372 13:42:49 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:23.372 13:42:49 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:25.273 13:42:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:25.273 13:42:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:25.273 13:42:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.273 13:42:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:25.273 13:42:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.273 13:42:51 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:25.273 13:42:51 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:25.273 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:25.273 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.273 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:25.274 /dev/nvme0n1 ]] 00:12:25.274 13:42:51 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.532 13:42:51 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:25.532 13:42:51 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:26.469 rmmod nvme_rdma 00:12:26.469 rmmod nvme_fabrics 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2437423 ']' 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2437423 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2437423 ']' 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2437423 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2437423 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2437423' 00:12:26.469 killing process with pid 2437423 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2437423 00:12:26.469 13:42:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2437423 00:12:27.037 13:42:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.037 13:42:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:27.037 00:12:27.037 real 0m12.741s 00:12:27.037 user 0m23.873s 00:12:27.037 sys 0m5.960s 00:12:27.037 13:42:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.037 13:42:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:27.038 ************************************ 00:12:27.038 END TEST nvmf_nvme_cli 00:12:27.038 ************************************ 00:12:27.038 13:42:53 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:27.038 13:42:53 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:12:27.038 13:42:53 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:27.038 13:42:53 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:27.038 13:42:53 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.038 13:42:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:27.038 ************************************ 00:12:27.038 START TEST nvmf_host_management 00:12:27.038 ************************************ 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:27.038 * Looking for test storage... 
00:12:27.038 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:27.038 13:42:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:33.606 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:33.606 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.606 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:33.607 Found net devices under 0000:18:00.0: mlx_0_0 00:12:33.607 
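Here the harness walks the Mellanox PCI functions it discovered (device 0x15b3:0x1015 at 0000:18:00.0 and 0000:18:00.1) and resolves each one to a net device through a sysfs glob. The same lookup can be done by hand; the sketch below uses the first port from this trace, and the printed name (mlx_0_0) is specific to this test bed:

    # map a ConnectX PCI function to its net device, as the pci_net_devs glob above does
    pci=0000:18:00.0
    ls "/sys/bus/pci/devices/$pci/net/"    # prints mlx_0_0 on this rig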
13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:33.607 Found net devices under 0000:18:00.1: mlx_0_1 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:33.607 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:33.867 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:33.867 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:33.867 altname enp24s0f0np0 00:12:33.867 altname ens785f0np0 00:12:33.867 inet 192.168.100.8/24 scope global mlx_0_0 00:12:33.867 valid_lft forever preferred_lft forever 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:33.867 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:33.867 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:33.867 altname enp24s0f1np1 00:12:33.867 altname ens785f1np1 00:12:33.867 inet 192.168.100.9/24 scope global mlx_0_1 00:12:33.867 valid_lft forever preferred_lft forever 
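The get_ip_address helper traced above is just an ip/awk/cut pipeline over the interface's IPv4 address. Reproducing it directly, with the interface names and addresses from this run:

    # same pipeline as get_ip_address in nvmf/common.sh
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.8 here
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.9 here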
00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:33.867 192.168.100.9' 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:33.867 192.168.100.9' 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:12:33.867 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:33.868 192.168.100.9' 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2441106 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2441106 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2441106 ']' 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.868 13:43:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:34.127 [2024-07-15 13:43:00.404034] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:12:34.127 [2024-07-15 13:43:00.404096] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.127 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.127 [2024-07-15 13:43:00.490943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.127 [2024-07-15 13:43:00.579017] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.127 [2024-07-15 13:43:00.579059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.127 [2024-07-15 13:43:00.579074] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.127 [2024-07-15 13:43:00.579084] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.127 [2024-07-15 13:43:00.579091] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.127 [2024-07-15 13:43:00.579208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.127 [2024-07-15 13:43:00.579309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.127 [2024-07-15 13:43:00.579409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.127 [2024-07-15 13:43:00.579410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:35.063 [2024-07-15 13:43:01.297245] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x189d480/0x18a1970) succeed. 00:12:35.063 [2024-07-15 13:43:01.306839] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x189eac0/0x18e3000) succeed. 
00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:35.063 Malloc0 00:12:35.063 [2024-07-15 13:43:01.497457] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2441324 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2441324 /var/tmp/bdevperf.sock 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2441324 ']' 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:35.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
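host_management.sh builds its target configuration by writing RPCs into rpcs.txt and piping them through rpc_cmd, the harness wrapper around SPDK's scripts/rpc.py; the batch itself is not echoed in the trace. A rough sketch of the equivalent setup issued as individual calls, with the transport flags, Malloc geometry (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), subsystem NQN, serial and listener address taken from the surrounding output and the remaining per-RPC flags assumed:

    sudo ./scripts/rpc.py nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024
    sudo ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420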
00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:35.063 { 00:12:35.063 "params": { 00:12:35.063 "name": "Nvme$subsystem", 00:12:35.063 "trtype": "$TEST_TRANSPORT", 00:12:35.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:35.063 "adrfam": "ipv4", 00:12:35.063 "trsvcid": "$NVMF_PORT", 00:12:35.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:35.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:35.063 "hdgst": ${hdgst:-false}, 00:12:35.063 "ddgst": ${ddgst:-false} 00:12:35.063 }, 00:12:35.063 "method": "bdev_nvme_attach_controller" 00:12:35.063 } 00:12:35.063 EOF 00:12:35.063 )") 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:35.063 13:43:01 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:35.063 "params": { 00:12:35.063 "name": "Nvme0", 00:12:35.063 "trtype": "rdma", 00:12:35.063 "traddr": "192.168.100.8", 00:12:35.063 "adrfam": "ipv4", 00:12:35.063 "trsvcid": "4420", 00:12:35.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:35.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:35.063 "hdgst": false, 00:12:35.063 "ddgst": false 00:12:35.063 }, 00:12:35.063 "method": "bdev_nvme_attach_controller" 00:12:35.063 }' 00:12:35.322 [2024-07-15 13:43:01.605552] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:35.322 [2024-07-15 13:43:01.605624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2441324 ] 00:12:35.322 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.322 [2024-07-15 13:43:01.691665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.322 [2024-07-15 13:43:01.779809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.582 Running I/O for 10 seconds... 
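bdevperf is started against its own RPC socket with a JSON config generated on the fly by gen_nvmf_target_json and handed over as /dev/fd/63; the trace prints only the inner bdev_nvme_attach_controller entry. The sketch below writes the same attach parameters into a regular file and runs the identical workload (-q 64 -o 65536 -w verify -t 10); the outer "subsystems"/"bdev" envelope is the usual SPDK JSON-config shape and is assumed here, as is the /tmp file path:

    # hypothetical standalone config file standing in for gen_nvmf_target_json's /dev/fd/63
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # same queue depth, I/O size, workload and runtime as the run above
    sudo ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10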
00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1452 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1452 -ge 100 ']' 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:12:36.150 [2024-07-15 13:43:02.516026] rdma.c: 864:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 8 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.150 13:43:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:37.085 [2024-07-15 13:43:03.517409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517620] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:12:37.085 [2024-07-15 13:43:03.517736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.517980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.517990] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.518000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.518010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:12:37.085 [2024-07-15 13:43:03.518020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.518030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:12:37.085 [2024-07-15 13:43:03.518039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.518050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:12:37.085 [2024-07-15 13:43:03.518059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.518070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:12:37.085 [2024-07-15 13:43:03.518079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.518091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:12:37.085 [2024-07-15 13:43:03.518100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.518110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:12:37.085 [2024-07-15 13:43:03.518120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.085 [2024-07-15 13:43:03.518131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:12:37.086 [2024-07-15 13:43:03.518143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:12:37.086 [2024-07-15 13:43:03.518164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 
nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:12:37.086 [2024-07-15 13:43:03.518185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:12:37.086 [2024-07-15 13:43:03.518205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:12:37.086 [2024-07-15 13:43:03.518226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:12:37.086 [2024-07-15 13:43:03.518246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:12:37.086 [2024-07-15 13:43:03.518266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:12:37.086 [2024-07-15 13:43:03.518286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182700 00:12:37.086 [2024-07-15 13:43:03.518306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182700 00:12:37.086 [2024-07-15 13:43:03.518326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79360 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 
len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182600 00:12:37.086 [2024-07-15 13:43:03.518631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182100 00:12:37.086 [2024-07-15 13:43:03.518653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182100 00:12:37.086 [2024-07-15 13:43:03.518674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182100 00:12:37.086 [2024-07-15 13:43:03.518693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182100 00:12:37.086 [2024-07-15 13:43:03.518713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182100 00:12:37.086 
[2024-07-15 13:43:03.518733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.518744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182500 00:12:37.086 [2024-07-15 13:43:03.518753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:a93a3000 sqhd:52b0 p:0 m:0 dnr:0 00:12:37.086 [2024-07-15 13:43:03.520669] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:12:37.086 [2024-07-15 13:43:03.521571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:37.086 task offset: 73728 on job bdev=Nvme0n1 fails 00:12:37.086 00:12:37.086 Latency(us) 00:12:37.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.086 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:37.086 Job: Nvme0n1 ended in about 1.56 seconds with error 00:12:37.086 Verification LBA range: start 0x0 length 0x400 00:12:37.086 Nvme0n1 : 1.56 1026.96 64.19 41.08 0.00 59373.37 2350.75 1028516.29 00:12:37.086 =================================================================================================================== 00:12:37.086 Total : 1026.96 64.19 41.08 0.00 59373.37 2350.75 1028516.29 00:12:37.086 [2024-07-15 13:43:03.523302] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2441324 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:37.086 { 00:12:37.086 "params": { 00:12:37.086 "name": "Nvme$subsystem", 00:12:37.086 "trtype": "$TEST_TRANSPORT", 00:12:37.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.086 "adrfam": "ipv4", 00:12:37.086 "trsvcid": "$NVMF_PORT", 00:12:37.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.086 "hdgst": ${hdgst:-false}, 00:12:37.086 "ddgst": ${ddgst:-false} 00:12:37.086 }, 00:12:37.086 "method": "bdev_nvme_attach_controller" 00:12:37.086 } 00:12:37.086 EOF 00:12:37.086 )") 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
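The stretch of log above is the heart of the host-management failover check: bdevperf runs I/O against Nvme0n1, the harness waits until the bdev has completed a minimum number of reads, and then it revokes and immediately re-grants this host's access to cnode0. The wall of 'ABORTED - SQ DELETION' completions, the destroyed qpair and the 'resetting controller' notice are the expected fallout, and the per-job table records the failed 1.56 s run before that bdevperf is killed (kill -9 2441324) and a fresh one is started with the same generated JSON. A condensed sketch of the poll-then-toggle steps, using the RPC calls visible in the trace; the direct rpc.py invocation and the sleep between polls are assumptions, the in-tree harness goes through its rpc_cmd wrapper instead:

rpc=./scripts/rpc.py                          # assumed invocation; the harness wraps this as rpc_cmd
sock=/var/tmp/bdevperf.sock
for ((i = 10; i != 0; i--)); do               # bounded loop, same shape as host_management.sh@54
    reads=$($rpc -s "$sock" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break             # the trace read 1452 ops on the first poll
    sleep 1                                   # assumption: the poll interval is not visible in this excerpt
done
# toggle the host's access while I/O is still in flight (host_management.sh@84 and @85)
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0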
00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:37.086 13:43:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:37.086 "params": { 00:12:37.086 "name": "Nvme0", 00:12:37.086 "trtype": "rdma", 00:12:37.086 "traddr": "192.168.100.8", 00:12:37.086 "adrfam": "ipv4", 00:12:37.086 "trsvcid": "4420", 00:12:37.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:37.086 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:37.086 "hdgst": false, 00:12:37.086 "ddgst": false 00:12:37.086 }, 00:12:37.086 "method": "bdev_nvme_attach_controller" 00:12:37.086 }' 00:12:37.086 [2024-07-15 13:43:03.579837] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:37.086 [2024-07-15 13:43:03.579886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2441690 ] 00:12:37.344 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.344 [2024-07-15 13:43:03.666826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.344 [2024-07-15 13:43:03.751056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.602 Running I/O for 1 seconds... 00:12:38.567 00:12:38.567 Latency(us) 00:12:38.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.567 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:38.567 Verification LBA range: start 0x0 length 0x400 00:12:38.567 Nvme0n1 : 1.01 3026.83 189.18 0.00 0.00 20710.39 662.48 42626.89 00:12:38.567 =================================================================================================================== 00:12:38.567 Total : 3026.83 189.18 0.00 0.00 20710.39 662.48 42626.89 00:12:38.826 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2441324 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:12:38.826 13:43:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:38.826 13:43:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:38.826 13:43:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:38.826 13:43:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:38.826 13:43:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:38.826 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:38.826 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:38.826 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:38.826 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:38.826 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:38.827 rmmod nvme_rdma 00:12:38.827 rmmod 
nvme_fabrics 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2441106 ']' 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2441106 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2441106 ']' 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2441106 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2441106 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2441106' 00:12:38.827 killing process with pid 2441106 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2441106 00:12:38.827 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2441106 00:12:39.085 [2024-07-15 13:43:05.561846] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:39.085 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:39.085 13:43:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:39.085 13:43:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:39.085 00:12:39.085 real 0m12.191s 00:12:39.085 user 0m25.093s 00:12:39.085 sys 0m6.420s 00:12:39.085 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.085 13:43:05 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:39.085 ************************************ 00:12:39.085 END TEST nvmf_host_management 00:12:39.085 ************************************ 00:12:39.344 13:43:05 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:39.344 13:43:05 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:12:39.344 13:43:05 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.344 13:43:05 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.344 13:43:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:39.344 ************************************ 00:12:39.344 START TEST nvmf_lvol 00:12:39.344 ************************************ 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:12:39.344 * Looking for test storage... 
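Just above, before the lvol suite starts, the host-management run tears itself down: nvmftestfini unloads the kernel initiator modules and killprocess stops the nvmf_tgt it launched earlier (pid 2441106, running as reactor_1). Every command of that teardown appears in the trace; gathered into one place, and assuming the pid variable was captured when the target was started, it amounts to:

modprobe -v -r nvme-rdma            # the rmmod lines above show this also drops nvme_fabrics
modprobe -v -r nvme-fabrics
nvmfpid=2441106                     # value taken from the trace; normally saved when nvmf_tgt is launched
if kill -0 "$nvmfpid" 2>/dev/null; then
    process_name=$(ps --no-headers -o comm= "$nvmfpid")   # the run above saw reactor_1
    if [ "$process_name" != sudo ]; then                  # the sudo case takes a different path in autotest_common.sh
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid"                                    # works because nvmf_tgt is a child of the harness shell
    fi
fi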
00:12:39.344 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:12:39.344 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
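While sourcing nvmf/common.sh for the lvol test, the trace above derives the initiator identity from nvme-cli: a host NQN from nvme gen-hostnqn and a host ID that is just the UUID portion of that NQN. A short sketch of that step; the ##*: trim is an assumption that reproduces the value printed above, since the exact expression inside common.sh is not visible in this log:

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep the trailing uuid: 809f3706-e051-e711-906e-0017a4403562
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")   # flags used with the nvme connect command set just after this in the trace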
00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:39.345 13:43:05 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.958 13:43:12 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:45.958 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:45.958 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:45.958 Found net devices under 0000:18:00.0: mlx_0_0 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:45.958 Found net devices under 0000:18:00.1: mlx_0_1 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:45.958 13:43:12 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:12:45.958 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:45.959 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.307 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:46.308 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:46.308 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:12:46.308 altname enp24s0f0np0 00:12:46.308 altname ens785f0np0 00:12:46.308 inet 192.168.100.8/24 scope global mlx_0_0 00:12:46.308 valid_lft forever preferred_lft forever 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:46.308 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:46.308 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:12:46.308 altname enp24s0f1np1 00:12:46.308 altname ens785f1np1 00:12:46.308 inet 192.168.100.9/24 scope global mlx_0_1 00:12:46.308 valid_lft forever preferred_lft forever 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 
== \m\l\x\_\0\_\0 ]] 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:46.308 192.168.100.9' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:46.308 192.168.100.9' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:46.308 192.168.100.9' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2444789 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2444789 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@829 
-- # '[' -z 2444789 ']' 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.308 13:43:12 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:46.308 [2024-07-15 13:43:12.691122] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:12:46.308 [2024-07-15 13:43:12.691179] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.308 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.308 [2024-07-15 13:43:12.777009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:46.569 [2024-07-15 13:43:12.872715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.569 [2024-07-15 13:43:12.872755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.569 [2024-07-15 13:43:12.872764] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.569 [2024-07-15 13:43:12.872773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.569 [2024-07-15 13:43:12.872781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.569 [2024-07-15 13:43:12.872849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.569 [2024-07-15 13:43:12.872948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.569 [2024-07-15 13:43:12.872949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.136 13:43:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.136 13:43:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:47.136 13:43:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.136 13:43:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.136 13:43:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:47.136 13:43:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.136 13:43:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:47.394 [2024-07-15 13:43:13.749668] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe3a780/0xe3ec70) succeed. 00:12:47.394 [2024-07-15 13:43:13.759084] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe3bd20/0xe80300) succeed. 
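The trace above brings the NVMe-oF target up over RDMA for the nvmf_lvol case, and the lines that follow build the volume stack it tests: two malloc bdevs striped into a raid0 bdev, a logical volume store on top of the raid, a lvol carved from the store, and that lvol exported through an NVMe/RDMA subsystem. The following is a condensed sketch of that sequence, not a verbatim extract of the test scripts; the workspace path, the 192.168.100.8 listen address and the subsystem name are the ones from this run, and the socket-wait loop is a simplified stand-in for the suite's waitforlisten helper.

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc=$SPDK/scripts/rpc.py

  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &            # target on cores 0-2, as above
  until [ -S /var/tmp/spdk.sock ]; do sleep 1; done           # stand-in for waitforlisten

  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512                              # -> Malloc0
  $rpc bdev_malloc_create 64 512                              # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)              # store UUID, captured as $lvs
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)             # the test lvol, as in the trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

With that in place, spdk_nvme_perf can connect to 192.168.100.8:4420 and drive I/O against the lvol while the snapshot, resize and clone RPCs seen in the trace run underneath it.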
00:12:47.394 13:43:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:47.653 13:43:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:47.653 13:43:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:47.911 13:43:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:47.911 13:43:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:48.170 13:43:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:48.170 13:43:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=209381e2-4305-4864-aec6-4b98e9c647e2 00:12:48.171 13:43:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 209381e2-4305-4864-aec6-4b98e9c647e2 lvol 20 00:12:48.429 13:43:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=34749029-e92f-454a-95b9-0553cea9e976 00:12:48.429 13:43:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:48.687 13:43:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 34749029-e92f-454a-95b9-0553cea9e976 00:12:48.945 13:43:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:48.945 [2024-07-15 13:43:15.407014] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:48.945 13:43:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:49.203 13:43:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2445194 00:12:49.203 13:43:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:49.203 13:43:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:49.203 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.138 13:43:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 34749029-e92f-454a-95b9-0553cea9e976 MY_SNAPSHOT 00:12:50.396 13:43:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b2d762ca-f7f4-4403-9f51-b6a3c4381e99 00:12:50.396 13:43:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 34749029-e92f-454a-95b9-0553cea9e976 30 00:12:50.654 13:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b2d762ca-f7f4-4403-9f51-b6a3c4381e99 MY_CLONE 00:12:50.912 13:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=2f981ee9-f378-4dbf-b71d-db33a9a2b4f6 00:12:50.913 13:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2f981ee9-f378-4dbf-b71d-db33a9a2b4f6 00:12:51.171 13:43:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2445194 00:13:01.146 Initializing NVMe Controllers 00:13:01.146 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:13:01.146 Controller IO queue size 128, less than required. 00:13:01.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:01.146 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:01.146 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:01.146 Initialization complete. Launching workers. 00:13:01.146 ======================================================== 00:13:01.146 Latency(us) 00:13:01.146 Device Information : IOPS MiB/s Average min max 00:13:01.146 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16613.49 64.90 7706.96 2370.12 52476.56 00:13:01.146 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16403.99 64.08 7804.56 3650.97 48741.29 00:13:01.146 ======================================================== 00:13:01.146 Total : 33017.48 128.97 7755.45 2370.12 52476.56 00:13:01.146 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 34749029-e92f-454a-95b9-0553cea9e976 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 209381e2-4305-4864-aec6-4b98e9c647e2 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:01.146 rmmod nvme_rdma 00:13:01.146 rmmod nvme_fabrics 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2444789 ']' 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2444789 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2444789 ']' 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@952 -- # kill -0 2444789 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:01.146 13:43:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2444789 00:13:01.405 13:43:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:01.405 13:43:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:01.405 13:43:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2444789' 00:13:01.405 killing process with pid 2444789 00:13:01.405 13:43:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2444789 00:13:01.405 13:43:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2444789 00:13:01.664 13:43:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:01.664 13:43:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:01.664 00:13:01.664 real 0m22.361s 00:13:01.664 user 1m11.861s 00:13:01.664 sys 0m6.591s 00:13:01.664 13:43:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:01.664 13:43:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:01.664 ************************************ 00:13:01.664 END TEST nvmf_lvol 00:13:01.664 ************************************ 00:13:01.664 13:43:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:01.664 13:43:28 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:13:01.664 13:43:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:01.664 13:43:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.664 13:43:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:01.664 ************************************ 00:13:01.664 START TEST nvmf_lvs_grow 00:13:01.664 ************************************ 00:13:01.664 13:43:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:13:01.924 * Looking for test storage... 
00:13:01.924 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:01.924 13:43:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.491 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:08.492 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:08.492 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:08.492 Found net devices under 0000:18:00.0: mlx_0_0 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:08.492 13:43:34 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:08.492 Found net devices under 0000:18:00.1: mlx_0_1 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:08.492 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:08.492 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:08.492 altname enp24s0f0np0 00:13:08.492 altname ens785f0np0 00:13:08.492 inet 192.168.100.8/24 scope global mlx_0_0 00:13:08.492 valid_lft forever preferred_lft forever 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:08.492 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:08.492 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:08.492 altname enp24s0f1np1 00:13:08.492 altname ens785f1np1 00:13:08.492 inet 192.168.100.9/24 scope global mlx_0_1 00:13:08.492 valid_lft forever preferred_lft forever 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:08.492 13:43:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:08.492 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:08.492 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:08.492 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:08.492 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:08.492 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:08.492 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:08.751 192.168.100.9' 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:08.751 192.168.100.9' 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:08.751 192.168.100.9' 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:08.751 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2449741 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2449741 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2449741 ']' 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.752 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:08.752 [2024-07-15 13:43:35.147997] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:08.752 [2024-07-15 13:43:35.148059] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.752 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.752 [2024-07-15 13:43:35.215926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.010 [2024-07-15 13:43:35.305584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.010 [2024-07-15 13:43:35.305621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.010 [2024-07-15 13:43:35.305631] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.010 [2024-07-15 13:43:35.305640] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.010 [2024-07-15 13:43:35.305647] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:09.010 [2024-07-15 13:43:35.305674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.576 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.576 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:09.576 13:43:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.576 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:09.577 13:43:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:09.577 13:43:36 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.577 13:43:36 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:09.835 [2024-07-15 13:43:36.183469] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1503f60/0x1508450) succeed. 00:13:09.835 [2024-07-15 13:43:36.192247] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1505460/0x1549ae0) succeed. 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:09.835 ************************************ 00:13:09.835 START TEST lvs_grow_clean 00:13:09.835 ************************************ 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:09.835 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:10.094 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:10.094 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:10.353 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4bb800d1-c700-4b13-aa55-1657d4911036 00:13:10.353 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:10.353 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:10.612 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:10.612 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:10.612 13:43:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4bb800d1-c700-4b13-aa55-1657d4911036 lvol 150 00:13:10.612 13:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2befbe32-0c23-4b4a-85e9-1827d3f22470 00:13:10.612 13:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:10.612 13:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:10.871 [2024-07-15 13:43:37.229016] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:10.871 [2024-07-15 13:43:37.229071] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:10.871 true 00:13:10.871 13:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:10.871 13:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:11.130 13:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:11.130 13:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:11.130 13:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2befbe32-0c23-4b4a-85e9-1827d3f22470 00:13:11.389 13:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:11.389 [2024-07-15 13:43:37.891107] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:11.389 13:43:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:11.648 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2450251 
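The steps above are the heart of the lvs_grow_clean case: the lvstore sits on an AIO bdev backed by a plain 200M file, so growing the file and rescanning the AIO bdev exposes new capacity, and bdev_lvol_grow_lvstore (issued later in this log, while bdevperf is running against the volume) extends the store from 49 to 99 data clusters. A condensed sketch of that mechanism, using the same paths and RPCs that appear in the trace; the $lvs variable stands for the store UUID captured at creation time:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc=$SPDK/scripts/rpc.py
  aio_file=$SPDK/test/nvmf/target/aio_bdev

  truncate -s 200M "$aio_file"                                 # 200M backing file
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096               # expose it as an AIO bdev, 4K blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 here

  truncate -s 400M "$aio_file"                                 # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev                                # ...and let the AIO bdev pick up the new size
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                        # extend the lvstore into the new space
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after the grow

The rescan only changes the AIO bdev's block count (51200 to 102400 in the trace); the lvstore does not see the extra clusters until the explicit grow_lvstore call, which is exactly what the data_clusters checks in this test verify.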
00:13:11.648 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:11.648 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:11.648 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2450251 /var/tmp/bdevperf.sock 00:13:11.648 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2450251 ']' 00:13:11.648 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:11.648 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.648 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:11.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:11.648 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.648 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:11.648 [2024-07-15 13:43:38.122519] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:11.648 [2024-07-15 13:43:38.122601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2450251 ] 00:13:11.648 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.906 [2024-07-15 13:43:38.210003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.906 [2024-07-15 13:43:38.296442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.474 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.474 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:12.474 13:43:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:12.733 Nvme0n1 00:13:12.733 13:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:12.992 [ 00:13:12.992 { 00:13:12.992 "name": "Nvme0n1", 00:13:12.992 "aliases": [ 00:13:12.992 "2befbe32-0c23-4b4a-85e9-1827d3f22470" 00:13:12.992 ], 00:13:12.992 "product_name": "NVMe disk", 00:13:12.992 "block_size": 4096, 00:13:12.992 "num_blocks": 38912, 00:13:12.992 "uuid": "2befbe32-0c23-4b4a-85e9-1827d3f22470", 00:13:12.992 "assigned_rate_limits": { 00:13:12.992 "rw_ios_per_sec": 0, 00:13:12.992 "rw_mbytes_per_sec": 0, 00:13:12.992 "r_mbytes_per_sec": 0, 00:13:12.992 "w_mbytes_per_sec": 0 00:13:12.992 }, 00:13:12.992 "claimed": false, 00:13:12.992 "zoned": false, 00:13:12.992 "supported_io_types": { 00:13:12.992 "read": true, 00:13:12.992 
"write": true, 00:13:12.992 "unmap": true, 00:13:12.992 "flush": true, 00:13:12.992 "reset": true, 00:13:12.992 "nvme_admin": true, 00:13:12.992 "nvme_io": true, 00:13:12.992 "nvme_io_md": false, 00:13:12.992 "write_zeroes": true, 00:13:12.992 "zcopy": false, 00:13:12.992 "get_zone_info": false, 00:13:12.992 "zone_management": false, 00:13:12.992 "zone_append": false, 00:13:12.992 "compare": true, 00:13:12.992 "compare_and_write": true, 00:13:12.992 "abort": true, 00:13:12.992 "seek_hole": false, 00:13:12.992 "seek_data": false, 00:13:12.992 "copy": true, 00:13:12.992 "nvme_iov_md": false 00:13:12.992 }, 00:13:12.992 "memory_domains": [ 00:13:12.992 { 00:13:12.992 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:12.992 "dma_device_type": 0 00:13:12.992 } 00:13:12.992 ], 00:13:12.992 "driver_specific": { 00:13:12.992 "nvme": [ 00:13:12.992 { 00:13:12.992 "trid": { 00:13:12.992 "trtype": "RDMA", 00:13:12.992 "adrfam": "IPv4", 00:13:12.992 "traddr": "192.168.100.8", 00:13:12.992 "trsvcid": "4420", 00:13:12.992 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:12.992 }, 00:13:12.992 "ctrlr_data": { 00:13:12.992 "cntlid": 1, 00:13:12.992 "vendor_id": "0x8086", 00:13:12.992 "model_number": "SPDK bdev Controller", 00:13:12.992 "serial_number": "SPDK0", 00:13:12.992 "firmware_revision": "24.09", 00:13:12.992 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:12.992 "oacs": { 00:13:12.992 "security": 0, 00:13:12.992 "format": 0, 00:13:12.992 "firmware": 0, 00:13:12.992 "ns_manage": 0 00:13:12.992 }, 00:13:12.992 "multi_ctrlr": true, 00:13:12.992 "ana_reporting": false 00:13:12.992 }, 00:13:12.992 "vs": { 00:13:12.992 "nvme_version": "1.3" 00:13:12.992 }, 00:13:12.992 "ns_data": { 00:13:12.992 "id": 1, 00:13:12.992 "can_share": true 00:13:12.992 } 00:13:12.992 } 00:13:12.992 ], 00:13:12.992 "mp_policy": "active_passive" 00:13:12.992 } 00:13:12.992 } 00:13:12.992 ] 00:13:12.992 13:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2450397 00:13:12.992 13:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:12.992 13:43:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:12.992 Running I/O for 10 seconds... 
00:13:14.370 Latency(us) 00:13:14.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:14.370 Nvme0n1 : 1.00 34081.00 133.13 0.00 0.00 0.00 0.00 0.00 00:13:14.370 =================================================================================================================== 00:13:14.370 Total : 34081.00 133.13 0.00 0.00 0.00 0.00 0.00 00:13:14.370 00:13:14.939 13:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:15.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:15.198 Nvme0n1 : 2.00 34175.50 133.50 0.00 0.00 0.00 0.00 0.00 00:13:15.198 =================================================================================================================== 00:13:15.198 Total : 34175.50 133.50 0.00 0.00 0.00 0.00 0.00 00:13:15.198 00:13:15.198 true 00:13:15.198 13:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:15.198 13:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:15.457 13:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:15.457 13:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:15.457 13:43:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2450397 00:13:16.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.050 Nvme0n1 : 3.00 34315.00 134.04 0.00 0.00 0.00 0.00 0.00 00:13:16.050 =================================================================================================================== 00:13:16.050 Total : 34315.00 134.04 0.00 0.00 0.00 0.00 0.00 00:13:16.050 00:13:16.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.987 Nvme0n1 : 4.00 34496.25 134.75 0.00 0.00 0.00 0.00 0.00 00:13:16.987 =================================================================================================================== 00:13:16.987 Total : 34496.25 134.75 0.00 0.00 0.00 0.00 0.00 00:13:16.987 00:13:18.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.365 Nvme0n1 : 5.00 34617.20 135.22 0.00 0.00 0.00 0.00 0.00 00:13:18.365 =================================================================================================================== 00:13:18.365 Total : 34617.20 135.22 0.00 0.00 0.00 0.00 0.00 00:13:18.365 00:13:19.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.355 Nvme0n1 : 6.00 34682.83 135.48 0.00 0.00 0.00 0.00 0.00 00:13:19.355 =================================================================================================================== 00:13:19.355 Total : 34682.83 135.48 0.00 0.00 0.00 0.00 0.00 00:13:19.355 00:13:20.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.306 Nvme0n1 : 7.00 34738.00 135.70 0.00 0.00 0.00 0.00 0.00 00:13:20.306 =================================================================================================================== 00:13:20.306 Total : 34738.00 135.70 0.00 0.00 
0.00 0.00 0.00 00:13:20.306 00:13:21.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.243 Nvme0n1 : 8.00 34783.75 135.87 0.00 0.00 0.00 0.00 0.00 00:13:21.243 =================================================================================================================== 00:13:21.243 Total : 34783.75 135.87 0.00 0.00 0.00 0.00 0.00 00:13:21.243 00:13:22.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:22.179 Nvme0n1 : 9.00 34815.89 136.00 0.00 0.00 0.00 0.00 0.00 00:13:22.179 =================================================================================================================== 00:13:22.179 Total : 34815.89 136.00 0.00 0.00 0.00 0.00 0.00 00:13:22.179 00:13:23.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:23.116 Nvme0n1 : 10.00 34844.40 136.11 0.00 0.00 0.00 0.00 0.00 00:13:23.116 =================================================================================================================== 00:13:23.116 Total : 34844.40 136.11 0.00 0.00 0.00 0.00 0.00 00:13:23.117 00:13:23.117 00:13:23.117 Latency(us) 00:13:23.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:23.117 Nvme0n1 : 10.00 34845.39 136.11 0.00 0.00 3670.91 2621.44 9915.88 00:13:23.117 =================================================================================================================== 00:13:23.117 Total : 34845.39 136.11 0.00 0.00 3670.91 2621.44 9915.88 00:13:23.117 0 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2450251 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2450251 ']' 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2450251 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2450251 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2450251' 00:13:23.117 killing process with pid 2450251 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2450251 00:13:23.117 Received shutdown signal, test time was about 10.000000 seconds 00:13:23.117 00:13:23.117 Latency(us) 00:13:23.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.117 =================================================================================================================== 00:13:23.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:23.117 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2450251 00:13:23.375 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:23.633 13:43:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:23.891 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:23.891 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:23.891 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:23.891 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:23.891 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:24.149 [2024-07-15 13:43:50.519995] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:24.149 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:24.407 request: 00:13:24.407 { 00:13:24.407 "uuid": "4bb800d1-c700-4b13-aa55-1657d4911036", 00:13:24.407 "method": "bdev_lvol_get_lvstores", 00:13:24.407 "req_id": 1 00:13:24.407 } 00:13:24.407 Got JSON-RPC error response 00:13:24.407 response: 00:13:24.407 { 00:13:24.407 "code": -19, 00:13:24.407 "message": "No such device" 00:13:24.407 } 
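The run above exercises the clean-grow path: the lvol store is grown while bdevperf keeps writing, the cluster counts are verified, and the test finally deletes the backing aio_bdev and expects bdev_lvol_get_lvstores to fail with -19 "No such device". A minimal standalone sketch of that grow-and-verify sequence, assuming a running SPDK target on the default /var/tmp/spdk.sock, $SPDK_DIR pointing at an SPDK checkout, and /tmp/aio_bdev_file as an illustrative backing file (the 200M/400M sizes and the 49/99 cluster counts are the ones the test logs):

  rpc=$SPDK_DIR/scripts/rpc.py
  aio_file=/tmp/aio_bdev_file

  truncate -s 200M "$aio_file"                                   # backing file for the AIO bdev
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096                 # AIO bdev with 4 KiB blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)           # prints the new lvstore UUID
  $rpc bdev_lvol_create -u "$lvs" lvol 150                       # 150 MiB lvol on the store

  truncate -s 400M "$aio_file"                                   # grow the backing file ...
  $rpc bdev_aio_rescan aio_bdev                                  # ... and let the AIO bdev pick it up
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                          # claim the new space for the lvstore
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after

Resizing the file and rescanning the AIO bdev alone leaves total_data_clusters unchanged; the count only jumps once bdev_lvol_grow_lvstore is issued, which is exactly what the checks against 49 and 99 in the log verify.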
00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:24.407 aio_bdev 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2befbe32-0c23-4b4a-85e9-1827d3f22470 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=2befbe32-0c23-4b4a-85e9-1827d3f22470 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:24.407 13:43:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:24.665 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2befbe32-0c23-4b4a-85e9-1827d3f22470 -t 2000 00:13:24.972 [ 00:13:24.972 { 00:13:24.972 "name": "2befbe32-0c23-4b4a-85e9-1827d3f22470", 00:13:24.972 "aliases": [ 00:13:24.972 "lvs/lvol" 00:13:24.972 ], 00:13:24.972 "product_name": "Logical Volume", 00:13:24.972 "block_size": 4096, 00:13:24.972 "num_blocks": 38912, 00:13:24.972 "uuid": "2befbe32-0c23-4b4a-85e9-1827d3f22470", 00:13:24.972 "assigned_rate_limits": { 00:13:24.972 "rw_ios_per_sec": 0, 00:13:24.972 "rw_mbytes_per_sec": 0, 00:13:24.972 "r_mbytes_per_sec": 0, 00:13:24.972 "w_mbytes_per_sec": 0 00:13:24.972 }, 00:13:24.972 "claimed": false, 00:13:24.972 "zoned": false, 00:13:24.972 "supported_io_types": { 00:13:24.972 "read": true, 00:13:24.972 "write": true, 00:13:24.972 "unmap": true, 00:13:24.972 "flush": false, 00:13:24.972 "reset": true, 00:13:24.972 "nvme_admin": false, 00:13:24.972 "nvme_io": false, 00:13:24.972 "nvme_io_md": false, 00:13:24.972 "write_zeroes": true, 00:13:24.972 "zcopy": false, 00:13:24.972 "get_zone_info": false, 00:13:24.972 "zone_management": false, 00:13:24.972 "zone_append": false, 00:13:24.972 "compare": false, 00:13:24.972 "compare_and_write": false, 00:13:24.972 "abort": false, 00:13:24.972 "seek_hole": true, 00:13:24.972 "seek_data": true, 00:13:24.972 "copy": false, 00:13:24.972 "nvme_iov_md": false 00:13:24.972 }, 00:13:24.972 "driver_specific": { 00:13:24.972 "lvol": { 00:13:24.972 "lvol_store_uuid": "4bb800d1-c700-4b13-aa55-1657d4911036", 00:13:24.972 "base_bdev": "aio_bdev", 00:13:24.972 "thin_provision": false, 00:13:24.972 "num_allocated_clusters": 38, 00:13:24.972 "snapshot": false, 00:13:24.972 "clone": false, 00:13:24.972 "esnap_clone": false 00:13:24.972 } 00:13:24.972 } 00:13:24.972 } 
00:13:24.972 ] 00:13:24.972 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:24.972 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:24.972 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:24.972 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:24.972 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:24.972 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:25.230 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:25.230 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2befbe32-0c23-4b4a-85e9-1827d3f22470 00:13:25.502 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4bb800d1-c700-4b13-aa55-1657d4911036 00:13:25.502 13:43:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:25.766 00:13:25.766 real 0m15.860s 00:13:25.766 user 0m15.661s 00:13:25.766 sys 0m1.297s 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:25.766 ************************************ 00:13:25.766 END TEST lvs_grow_clean 00:13:25.766 ************************************ 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:25.766 ************************************ 00:13:25.766 START TEST lvs_grow_dirty 00:13:25.766 ************************************ 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:25.766 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:26.025 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:26.025 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:26.282 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=255c45e4-5070-427c-a046-6cfcae4d198f 00:13:26.282 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:26.282 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:26.540 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:26.540 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:26.540 13:43:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 255c45e4-5070-427c-a046-6cfcae4d198f lvol 150 00:13:26.540 13:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4eab338c-7e95-4b20-905d-47e1e08a73f9 00:13:26.541 13:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:26.541 13:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:26.799 [2024-07-15 13:43:53.195509] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:26.799 [2024-07-15 13:43:53.195585] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:26.799 true 00:13:26.799 13:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:26.799 13:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:27.057 13:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:13:27.057 13:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:27.057 13:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4eab338c-7e95-4b20-905d-47e1e08a73f9 00:13:27.316 13:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:27.575 [2024-07-15 13:43:53.893716] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:27.575 13:43:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:27.575 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2452386 00:13:27.575 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:27.575 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:27.575 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2452386 /var/tmp/bdevperf.sock 00:13:27.575 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2452386 ']' 00:13:27.575 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:27.575 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.575 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:27.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:27.575 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.575 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:27.834 [2024-07-15 13:43:54.126351] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
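Between the lvstore setup and the per-second numbers that follow, the dirty variant exports the lvol over NVMe-oF/RDMA and drives it from a separate bdevperf process. A sketch of that wiring, using the NQN, address, UUID and flags the log records ($SPDK_DIR and the explicit backgrounding of bdevperf are illustrative; the test itself waits for the bdevperf RPC socket via waitforlisten before issuing RPCs):

  rpc=$SPDK_DIR/scripts/rpc.py

  # target side: wrap the lvol in an NVMe-oF subsystem and listen on RDMA
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4eab338c-7e95-4b20-905d-47e1e08a73f9
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

  # initiator side: a second SPDK app (bdevperf) on its own RPC socket attaches
  # to that subsystem and sees the lvol as Nvme0n1
  $SPDK_DIR/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # kicks off the 10 s randwrite run reported below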
00:13:27.834 [2024-07-15 13:43:54.126411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452386 ] 00:13:27.834 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.834 [2024-07-15 13:43:54.212311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.834 [2024-07-15 13:43:54.293484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.772 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.772 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:28.772 13:43:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:28.772 Nvme0n1 00:13:28.772 13:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:29.030 [ 00:13:29.030 { 00:13:29.030 "name": "Nvme0n1", 00:13:29.030 "aliases": [ 00:13:29.030 "4eab338c-7e95-4b20-905d-47e1e08a73f9" 00:13:29.031 ], 00:13:29.031 "product_name": "NVMe disk", 00:13:29.031 "block_size": 4096, 00:13:29.031 "num_blocks": 38912, 00:13:29.031 "uuid": "4eab338c-7e95-4b20-905d-47e1e08a73f9", 00:13:29.031 "assigned_rate_limits": { 00:13:29.031 "rw_ios_per_sec": 0, 00:13:29.031 "rw_mbytes_per_sec": 0, 00:13:29.031 "r_mbytes_per_sec": 0, 00:13:29.031 "w_mbytes_per_sec": 0 00:13:29.031 }, 00:13:29.031 "claimed": false, 00:13:29.031 "zoned": false, 00:13:29.031 "supported_io_types": { 00:13:29.031 "read": true, 00:13:29.031 "write": true, 00:13:29.031 "unmap": true, 00:13:29.031 "flush": true, 00:13:29.031 "reset": true, 00:13:29.031 "nvme_admin": true, 00:13:29.031 "nvme_io": true, 00:13:29.031 "nvme_io_md": false, 00:13:29.031 "write_zeroes": true, 00:13:29.031 "zcopy": false, 00:13:29.031 "get_zone_info": false, 00:13:29.031 "zone_management": false, 00:13:29.031 "zone_append": false, 00:13:29.031 "compare": true, 00:13:29.031 "compare_and_write": true, 00:13:29.031 "abort": true, 00:13:29.031 "seek_hole": false, 00:13:29.031 "seek_data": false, 00:13:29.031 "copy": true, 00:13:29.031 "nvme_iov_md": false 00:13:29.031 }, 00:13:29.031 "memory_domains": [ 00:13:29.031 { 00:13:29.031 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:29.031 "dma_device_type": 0 00:13:29.031 } 00:13:29.031 ], 00:13:29.031 "driver_specific": { 00:13:29.031 "nvme": [ 00:13:29.031 { 00:13:29.031 "trid": { 00:13:29.031 "trtype": "RDMA", 00:13:29.031 "adrfam": "IPv4", 00:13:29.031 "traddr": "192.168.100.8", 00:13:29.031 "trsvcid": "4420", 00:13:29.031 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:29.031 }, 00:13:29.031 "ctrlr_data": { 00:13:29.031 "cntlid": 1, 00:13:29.031 "vendor_id": "0x8086", 00:13:29.031 "model_number": "SPDK bdev Controller", 00:13:29.031 "serial_number": "SPDK0", 00:13:29.031 "firmware_revision": "24.09", 00:13:29.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:29.031 "oacs": { 00:13:29.031 "security": 0, 00:13:29.031 "format": 0, 00:13:29.031 "firmware": 0, 00:13:29.031 "ns_manage": 0 00:13:29.031 }, 00:13:29.031 "multi_ctrlr": true, 00:13:29.031 "ana_reporting": false 
00:13:29.031 }, 00:13:29.031 "vs": { 00:13:29.031 "nvme_version": "1.3" 00:13:29.031 }, 00:13:29.031 "ns_data": { 00:13:29.031 "id": 1, 00:13:29.031 "can_share": true 00:13:29.031 } 00:13:29.031 } 00:13:29.031 ], 00:13:29.031 "mp_policy": "active_passive" 00:13:29.031 } 00:13:29.031 } 00:13:29.031 ] 00:13:29.031 13:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2452572 00:13:29.031 13:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:29.031 13:43:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:29.031 Running I/O for 10 seconds... 00:13:29.967 Latency(us) 00:13:29.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.967 Nvme0n1 : 1.00 33408.00 130.50 0.00 0.00 0.00 0.00 0.00 00:13:29.967 =================================================================================================================== 00:13:29.967 Total : 33408.00 130.50 0.00 0.00 0.00 0.00 0.00 00:13:29.967 00:13:30.903 13:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:31.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:31.162 Nvme0n1 : 2.00 34081.00 133.13 0.00 0.00 0.00 0.00 0.00 00:13:31.162 =================================================================================================================== 00:13:31.162 Total : 34081.00 133.13 0.00 0.00 0.00 0.00 0.00 00:13:31.162 00:13:31.162 true 00:13:31.162 13:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:31.162 13:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:31.421 13:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:31.421 13:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:31.421 13:43:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2452572 00:13:31.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:31.990 Nvme0n1 : 3.00 34315.33 134.04 0.00 0.00 0.00 0.00 0.00 00:13:31.990 =================================================================================================================== 00:13:31.990 Total : 34315.33 134.04 0.00 0.00 0.00 0.00 0.00 00:13:31.990 00:13:33.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.370 Nvme0n1 : 4.00 34480.00 134.69 0.00 0.00 0.00 0.00 0.00 00:13:33.370 =================================================================================================================== 00:13:33.370 Total : 34480.00 134.69 0.00 0.00 0.00 0.00 0.00 00:13:33.370 00:13:34.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.307 Nvme0n1 : 5.00 34534.60 134.90 0.00 0.00 0.00 0.00 0.00 00:13:34.307 
=================================================================================================================== 00:13:34.307 Total : 34534.60 134.90 0.00 0.00 0.00 0.00 0.00 00:13:34.307 00:13:35.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.244 Nvme0n1 : 6.00 34623.83 135.25 0.00 0.00 0.00 0.00 0.00 00:13:35.244 =================================================================================================================== 00:13:35.244 Total : 34623.83 135.25 0.00 0.00 0.00 0.00 0.00 00:13:35.244 00:13:36.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.182 Nvme0n1 : 7.00 34697.29 135.54 0.00 0.00 0.00 0.00 0.00 00:13:36.182 =================================================================================================================== 00:13:36.182 Total : 34697.29 135.54 0.00 0.00 0.00 0.00 0.00 00:13:36.182 00:13:37.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.119 Nvme0n1 : 8.00 34743.88 135.72 0.00 0.00 0.00 0.00 0.00 00:13:37.119 =================================================================================================================== 00:13:37.119 Total : 34743.88 135.72 0.00 0.00 0.00 0.00 0.00 00:13:37.119 00:13:38.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.055 Nvme0n1 : 9.00 34745.00 135.72 0.00 0.00 0.00 0.00 0.00 00:13:38.055 =================================================================================================================== 00:13:38.055 Total : 34745.00 135.72 0.00 0.00 0.00 0.00 0.00 00:13:38.055 00:13:38.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.992 Nvme0n1 : 10.00 34784.10 135.88 0.00 0.00 0.00 0.00 0.00 00:13:38.992 =================================================================================================================== 00:13:38.992 Total : 34784.10 135.88 0.00 0.00 0.00 0.00 0.00 00:13:38.992 00:13:38.992 00:13:38.992 Latency(us) 00:13:38.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.992 Nvme0n1 : 10.00 34785.30 135.88 0.00 0.00 3677.24 2521.71 12195.39 00:13:38.992 =================================================================================================================== 00:13:38.992 Total : 34785.30 135.88 0.00 0.00 3677.24 2521.71 12195.39 00:13:38.992 0 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2452386 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2452386 ']' 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2452386 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2452386 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2452386' 00:13:39.276 killing process with pid 2452386 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2452386 00:13:39.276 Received shutdown signal, test time was about 10.000000 seconds 00:13:39.276 00:13:39.276 Latency(us) 00:13:39.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.276 =================================================================================================================== 00:13:39.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:39.276 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2452386 00:13:39.534 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:39.534 13:44:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:39.793 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:39.793 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2449741 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2449741 00:13:40.067 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2449741 Killed "${NVMF_APP[@]}" "$@" 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2454030 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2454030 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2454030 ']' 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.067 13:44:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:40.067 [2024-07-15 13:44:06.465401] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:40.067 [2024-07-15 13:44:06.465463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.067 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.067 [2024-07-15 13:44:06.553898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.332 [2024-07-15 13:44:06.644406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.332 [2024-07-15 13:44:06.644452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.332 [2024-07-15 13:44:06.644461] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.332 [2024-07-15 13:44:06.644485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.332 [2024-07-15 13:44:06.644492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.332 [2024-07-15 13:44:06.644514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.901 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.901 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:40.901 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:40.901 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.901 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:40.901 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.901 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:41.211 [2024-07-15 13:44:07.478176] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:41.211 [2024-07-15 13:44:07.478274] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:41.211 [2024-07-15 13:44:07.478301] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:41.211 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:41.211 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4eab338c-7e95-4b20-905d-47e1e08a73f9 00:13:41.211 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4eab338c-7e95-4b20-905d-47e1e08a73f9 00:13:41.211 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:13:41.211 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:41.211 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:41.211 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:41.211 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:41.211 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4eab338c-7e95-4b20-905d-47e1e08a73f9 -t 2000 00:13:41.469 [ 00:13:41.469 { 00:13:41.469 "name": "4eab338c-7e95-4b20-905d-47e1e08a73f9", 00:13:41.469 "aliases": [ 00:13:41.469 "lvs/lvol" 00:13:41.469 ], 00:13:41.469 "product_name": "Logical Volume", 00:13:41.469 "block_size": 4096, 00:13:41.469 "num_blocks": 38912, 00:13:41.469 "uuid": "4eab338c-7e95-4b20-905d-47e1e08a73f9", 00:13:41.469 "assigned_rate_limits": { 00:13:41.469 "rw_ios_per_sec": 0, 00:13:41.469 "rw_mbytes_per_sec": 0, 00:13:41.469 "r_mbytes_per_sec": 0, 00:13:41.469 "w_mbytes_per_sec": 0 00:13:41.469 }, 00:13:41.469 "claimed": false, 00:13:41.469 "zoned": false, 00:13:41.469 "supported_io_types": { 00:13:41.469 "read": true, 00:13:41.469 "write": true, 00:13:41.469 "unmap": true, 00:13:41.469 "flush": false, 00:13:41.469 "reset": true, 00:13:41.469 "nvme_admin": false, 00:13:41.469 "nvme_io": false, 00:13:41.469 "nvme_io_md": false, 00:13:41.469 "write_zeroes": true, 00:13:41.469 "zcopy": false, 00:13:41.469 "get_zone_info": false, 00:13:41.469 "zone_management": false, 00:13:41.469 "zone_append": false, 00:13:41.469 "compare": false, 00:13:41.469 "compare_and_write": false, 00:13:41.469 "abort": false, 00:13:41.469 "seek_hole": true, 00:13:41.469 "seek_data": true, 00:13:41.469 "copy": false, 00:13:41.469 "nvme_iov_md": false 00:13:41.469 }, 00:13:41.469 "driver_specific": { 00:13:41.469 "lvol": { 00:13:41.469 "lvol_store_uuid": "255c45e4-5070-427c-a046-6cfcae4d198f", 00:13:41.469 "base_bdev": "aio_bdev", 00:13:41.469 "thin_provision": false, 00:13:41.469 "num_allocated_clusters": 38, 00:13:41.469 "snapshot": false, 00:13:41.469 "clone": false, 00:13:41.469 "esnap_clone": false 00:13:41.469 } 00:13:41.469 } 00:13:41.469 } 00:13:41.469 ] 00:13:41.469 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:41.469 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:41.469 13:44:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:41.728 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:41.728 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:41.728 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:41.728 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:41.728 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:41.986 [2024-07-15 13:44:08.362243] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:41.986 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:42.245 request: 00:13:42.245 { 00:13:42.245 "uuid": "255c45e4-5070-427c-a046-6cfcae4d198f", 00:13:42.245 "method": "bdev_lvol_get_lvstores", 00:13:42.245 "req_id": 1 00:13:42.245 } 00:13:42.245 Got JSON-RPC error response 00:13:42.245 response: 00:13:42.245 { 00:13:42.245 "code": -19, 00:13:42.245 "message": "No such device" 00:13:42.245 } 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:42.245 aio_bdev 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4eab338c-7e95-4b20-905d-47e1e08a73f9 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@897 -- # local bdev_name=4eab338c-7e95-4b20-905d-47e1e08a73f9 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:42.245 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:42.504 13:44:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4eab338c-7e95-4b20-905d-47e1e08a73f9 -t 2000 00:13:42.832 [ 00:13:42.832 { 00:13:42.832 "name": "4eab338c-7e95-4b20-905d-47e1e08a73f9", 00:13:42.832 "aliases": [ 00:13:42.832 "lvs/lvol" 00:13:42.832 ], 00:13:42.832 "product_name": "Logical Volume", 00:13:42.832 "block_size": 4096, 00:13:42.832 "num_blocks": 38912, 00:13:42.832 "uuid": "4eab338c-7e95-4b20-905d-47e1e08a73f9", 00:13:42.832 "assigned_rate_limits": { 00:13:42.832 "rw_ios_per_sec": 0, 00:13:42.832 "rw_mbytes_per_sec": 0, 00:13:42.832 "r_mbytes_per_sec": 0, 00:13:42.832 "w_mbytes_per_sec": 0 00:13:42.832 }, 00:13:42.832 "claimed": false, 00:13:42.832 "zoned": false, 00:13:42.832 "supported_io_types": { 00:13:42.832 "read": true, 00:13:42.832 "write": true, 00:13:42.832 "unmap": true, 00:13:42.832 "flush": false, 00:13:42.832 "reset": true, 00:13:42.832 "nvme_admin": false, 00:13:42.832 "nvme_io": false, 00:13:42.832 "nvme_io_md": false, 00:13:42.832 "write_zeroes": true, 00:13:42.832 "zcopy": false, 00:13:42.832 "get_zone_info": false, 00:13:42.832 "zone_management": false, 00:13:42.832 "zone_append": false, 00:13:42.832 "compare": false, 00:13:42.832 "compare_and_write": false, 00:13:42.832 "abort": false, 00:13:42.832 "seek_hole": true, 00:13:42.832 "seek_data": true, 00:13:42.832 "copy": false, 00:13:42.832 "nvme_iov_md": false 00:13:42.832 }, 00:13:42.832 "driver_specific": { 00:13:42.832 "lvol": { 00:13:42.832 "lvol_store_uuid": "255c45e4-5070-427c-a046-6cfcae4d198f", 00:13:42.832 "base_bdev": "aio_bdev", 00:13:42.832 "thin_provision": false, 00:13:42.832 "num_allocated_clusters": 38, 00:13:42.832 "snapshot": false, 00:13:42.832 "clone": false, 00:13:42.832 "esnap_clone": false 00:13:42.832 } 00:13:42.832 } 00:13:42.832 } 00:13:42.832 ] 00:13:42.832 13:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:42.832 13:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:42.832 13:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:42.832 13:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:42.832 13:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:42.832 13:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:13:43.091 13:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:43.091 13:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4eab338c-7e95-4b20-905d-47e1e08a73f9 00:13:43.350 13:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 255c45e4-5070-427c-a046-6cfcae4d198f 00:13:43.350 13:44:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:43.610 00:13:43.610 real 0m17.802s 00:13:43.610 user 0m46.009s 00:13:43.610 sys 0m3.440s 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:43.610 ************************************ 00:13:43.610 END TEST lvs_grow_dirty 00:13:43.610 ************************************ 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:43.610 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:43.610 nvmf_trace.0 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:43.894 rmmod nvme_rdma 00:13:43.894 rmmod nvme_fabrics 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@124 -- # set -e 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2454030 ']' 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2454030 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2454030 ']' 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2454030 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2454030 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2454030' 00:13:43.894 killing process with pid 2454030 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2454030 00:13:43.894 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2454030 00:13:44.153 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.153 13:44:10 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:44.153 00:13:44.153 real 0m42.350s 00:13:44.153 user 1m7.975s 00:13:44.153 sys 0m10.508s 00:13:44.153 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.153 13:44:10 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.153 ************************************ 00:13:44.153 END TEST nvmf_lvs_grow 00:13:44.153 ************************************ 00:13:44.153 13:44:10 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:44.154 13:44:10 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:44.154 13:44:10 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:44.154 13:44:10 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.154 13:44:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:44.154 ************************************ 00:13:44.154 START TEST nvmf_bdev_io_wait 00:13:44.154 ************************************ 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:44.154 * Looking for test storage... 
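The teardown logged just above follows the usual harness pattern: archive the SPDK trace shared-memory file for offline analysis, unload the kernel fabrics modules, and stop the nvmf_tgt that was started with -i 0 -e 0xFFFF. A rough sketch, with $output standing in for the autotest results directory and $nvmfpid for the target PID the log shows as 2454030:

  tar -C /dev/shm/ -cvzf "$output/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # trace can also be inspected live with 'spdk_trace -s nvmf -i 0'
  modprobe -v -r nvme-rdma       # drops nvme_rdma and nvme_fabrics, as the rmmod lines above show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # the harness then waits for the nvmf_tgt reactor (reactor_0) to exit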
00:13:44.154 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.154 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.413 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.414 13:44:10 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.414 13:44:10 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.980 
13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:13:50.980 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:50.980 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:50.980 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:50.981 Found net devices under 0000:18:00.0: mlx_0_0 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:50.981 Found net devices under 0000:18:00.1: mlx_0_1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:50.981 13:44:17 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:50.981 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:50.981 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:13:50.981 altname enp24s0f0np0 00:13:50.981 altname ens785f0np0 00:13:50.981 inet 192.168.100.8/24 scope global mlx_0_0 00:13:50.981 valid_lft forever preferred_lft forever 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:50.981 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:50.981 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:13:50.981 altname enp24s0f1np1 00:13:50.981 altname ens785f1np1 00:13:50.981 inet 192.168.100.9/24 scope global mlx_0_1 00:13:50.981 valid_lft forever preferred_lft forever 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:50.981 192.168.100.9' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:50.981 192.168.100.9' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:50.981 192.168.100.9' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:50.981 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:50.982 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:50.982 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:50.982 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2457537 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2457537 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2457537 ']' 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.241 13:44:17 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:51.241 [2024-07-15 13:44:17.585216] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:51.241 [2024-07-15 13:44:17.585279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.241 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.241 [2024-07-15 13:44:17.672948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.241 [2024-07-15 13:44:17.761151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.241 [2024-07-15 13:44:17.761194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
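
A few lines above, the harness turns the enumerated RDMA interfaces into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with the same ip/awk/cut pipeline and a head/tail split before it launches nvmf_tgt. A minimal standalone sketch of that derivation follows; the interface names and the extraction pipeline are taken from the trace, the surrounding scaffolding is illustrative and not the harness's own get_available_rdma_ips.

#!/usr/bin/env bash
set -euo pipefail

rdma_ifs=(mlx_0_0 mlx_0_1)   # assumed names, copied from the devices listed in the trace
ip_list=""
for dev in "${rdma_ifs[@]}"; do
    # same pipeline as the trace: IPv4 address of the interface, without the /prefix
    addr=$(ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1)
    ip_list+="${addr}"$'\n'
done

# first line becomes the primary target address, second line the secondary one
NVMF_FIRST_TARGET_IP=$(echo "$ip_list" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$ip_list" | tail -n +2 | head -n 1)
echo "first: $NVMF_FIRST_TARGET_IP  second: $NVMF_SECOND_TARGET_IP"

With the two mlx_0_* ports configured as above this prints 192.168.100.8 and 192.168.100.9, which is exactly the RDMA_IP_LIST the trace echoes.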
00:13:51.241 [2024-07-15 13:44:17.761203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.241 [2024-07-15 13:44:17.761211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.241 [2024-07-15 13:44:17.761218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.241 [2024-07-15 13:44:17.761333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.241 [2024-07-15 13:44:17.761433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.241 [2024-07-15 13:44:17.761521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.241 [2024-07-15 13:44:17.761522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.177 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:52.177 [2024-07-15 13:44:18.564623] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf8b230/0xf8f720) succeed. 00:13:52.177 [2024-07-15 13:44:18.574269] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf8c870/0xfd0db0) succeed. 
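
Between the nvmf_tgt launch above and the subsystem setup that the following trace lines perform, the whole target-side bring-up reduces to a short RPC sequence. The sketch below is a condensed replay, not the harness itself: the binaries, RPC methods and arguments are the ones visible in the trace, while SPDK_DIR and the sleep-in-place-of-waitforlisten are assumptions.

#!/usr/bin/env bash
set -euo pipefail
SPDK_DIR=/path/to/spdk              # assumption: adjust to your SPDK checkout
RPC="$SPDK_DIR/scripts/rpc.py"

# 1. Start the target paused (--wait-for-rpc) so bdev options can still be changed.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
sleep 2                             # the harness uses waitforlisten on /var/tmp/spdk.sock instead

# 2. Same RPC sequence as the trace: tune bdev pools, finish init, create the
#    RDMA transport, back it with a malloc bdev and expose it on port 4420.
"$RPC" bdev_set_options -p 5 -c 1
"$RPC" framework_start_init
"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

After the last call the target logs the "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice seen further down, and hosts can attach to cnode1 over RDMA.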
00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:52.437 Malloc0 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:52.437 [2024-07-15 13:44:18.765422] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2457740 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2457742 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:52.437 { 00:13:52.437 "params": { 00:13:52.437 "name": "Nvme$subsystem", 00:13:52.437 "trtype": "$TEST_TRANSPORT", 00:13:52.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.437 "adrfam": "ipv4", 00:13:52.437 "trsvcid": "$NVMF_PORT", 00:13:52.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.437 "hdgst": ${hdgst:-false}, 00:13:52.437 "ddgst": ${ddgst:-false} 00:13:52.437 }, 00:13:52.437 "method": "bdev_nvme_attach_controller" 00:13:52.437 } 00:13:52.437 EOF 00:13:52.437 
)") 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2457744 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:52.437 { 00:13:52.437 "params": { 00:13:52.437 "name": "Nvme$subsystem", 00:13:52.437 "trtype": "$TEST_TRANSPORT", 00:13:52.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.437 "adrfam": "ipv4", 00:13:52.437 "trsvcid": "$NVMF_PORT", 00:13:52.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.437 "hdgst": ${hdgst:-false}, 00:13:52.437 "ddgst": ${ddgst:-false} 00:13:52.437 }, 00:13:52.437 "method": "bdev_nvme_attach_controller" 00:13:52.437 } 00:13:52.437 EOF 00:13:52.437 )") 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2457747 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:52.437 { 00:13:52.437 "params": { 00:13:52.437 "name": "Nvme$subsystem", 00:13:52.437 "trtype": "$TEST_TRANSPORT", 00:13:52.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.437 "adrfam": "ipv4", 00:13:52.437 "trsvcid": "$NVMF_PORT", 00:13:52.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.437 "hdgst": ${hdgst:-false}, 00:13:52.437 "ddgst": ${ddgst:-false} 00:13:52.437 }, 00:13:52.437 "method": "bdev_nvme_attach_controller" 00:13:52.437 } 00:13:52.437 EOF 00:13:52.437 )") 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:52.437 13:44:18 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:52.437 { 00:13:52.437 "params": { 00:13:52.437 "name": "Nvme$subsystem", 00:13:52.437 "trtype": "$TEST_TRANSPORT", 00:13:52.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.437 "adrfam": "ipv4", 00:13:52.437 "trsvcid": "$NVMF_PORT", 00:13:52.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.437 "hdgst": ${hdgst:-false}, 00:13:52.437 "ddgst": ${ddgst:-false} 00:13:52.437 }, 00:13:52.437 "method": "bdev_nvme_attach_controller" 00:13:52.437 } 00:13:52.437 EOF 00:13:52.437 )") 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2457740 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:52.437 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:52.438 "params": { 00:13:52.438 "name": "Nvme1", 00:13:52.438 "trtype": "rdma", 00:13:52.438 "traddr": "192.168.100.8", 00:13:52.438 "adrfam": "ipv4", 00:13:52.438 "trsvcid": "4420", 00:13:52.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.438 "hdgst": false, 00:13:52.438 "ddgst": false 00:13:52.438 }, 00:13:52.438 "method": "bdev_nvme_attach_controller" 00:13:52.438 }' 00:13:52.438 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
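
The repeated heredoc fragments above come from gen_nvmf_target_json in nvmf/common.sh: one attach-controller stanza per subsystem, with the digest options defaulted via ${hdgst:-false}/${ddgst:-false}, and a final pass through jq (the "jq ." steps in the trace). The function below is a simplified stand-in for that helper, not the harness code; the keys and values mirror the resolved JSON printed in the next lines, everything else is illustrative.

#!/usr/bin/env bash
set -euo pipefail

TEST_TRANSPORT=rdma
NVMF_FIRST_TARGET_IP=192.168.100.8
NVMF_PORT=4420

gen_subsystem_json() {
    # Emit one bdev_nvme_attach_controller stanza for subsystem number $1.
    local subsystem=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_subsystem_json 1 | jq .   # jq both pretty-prints and validates the fragment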
00:13:52.438 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:52.438 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:52.438 "params": { 00:13:52.438 "name": "Nvme1", 00:13:52.438 "trtype": "rdma", 00:13:52.438 "traddr": "192.168.100.8", 00:13:52.438 "adrfam": "ipv4", 00:13:52.438 "trsvcid": "4420", 00:13:52.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.438 "hdgst": false, 00:13:52.438 "ddgst": false 00:13:52.438 }, 00:13:52.438 "method": "bdev_nvme_attach_controller" 00:13:52.438 }' 00:13:52.438 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:52.438 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:52.438 "params": { 00:13:52.438 "name": "Nvme1", 00:13:52.438 "trtype": "rdma", 00:13:52.438 "traddr": "192.168.100.8", 00:13:52.438 "adrfam": "ipv4", 00:13:52.438 "trsvcid": "4420", 00:13:52.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.438 "hdgst": false, 00:13:52.438 "ddgst": false 00:13:52.438 }, 00:13:52.438 "method": "bdev_nvme_attach_controller" 00:13:52.438 }' 00:13:52.438 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:52.438 13:44:18 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:52.438 "params": { 00:13:52.438 "name": "Nvme1", 00:13:52.438 "trtype": "rdma", 00:13:52.438 "traddr": "192.168.100.8", 00:13:52.438 "adrfam": "ipv4", 00:13:52.438 "trsvcid": "4420", 00:13:52.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.438 "hdgst": false, 00:13:52.438 "ddgst": false 00:13:52.438 }, 00:13:52.438 "method": "bdev_nvme_attach_controller" 00:13:52.438 }' 00:13:52.438 [2024-07-15 13:44:18.819861] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:52.438 [2024-07-15 13:44:18.819928] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:52.438 [2024-07-15 13:44:18.820608] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:52.438 [2024-07-15 13:44:18.820663] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:52.438 [2024-07-15 13:44:18.820985] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:13:52.438 [2024-07-15 13:44:18.821036] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:52.438 [2024-07-15 13:44:18.824961] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
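
The four resolved JSON payloads above are handed to four bdevperf instances, one per workload, each pinned to its own core with -m and given a distinct instance id with -i; the harness feeds the config through process substitution, which is why every command line shows --json /dev/fd/63. Below is a minimal orchestration sketch: the flags, core masks and PID variable names mirror the trace, while BDEVPERF and NVME_JSON are placeholders (NVME_JSON stands for the full bdevperf config the harness generates, of which the fragment above is only the attach-controller portion).

#!/usr/bin/env bash
set -euo pipefail

BDEVPERF=${BDEVPERF:-/path/to/spdk/build/examples/bdevperf}   # assumed build path
NVME_JSON=${NVME_JSON:-/tmp/nvme1.json}                       # assumed pre-generated config

# One instance per workload: 128-deep queue, 4 KiB I/O, 1-second run, 256 MB of memory each.
"$BDEVPERF" -m 0x10 -i 1 --json "$NVME_JSON" -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json "$NVME_JSON" -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json "$NVME_JSON" -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json "$NVME_JSON" -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!

# Reap the writers/readers in the same order the trace waits on its PIDs.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

Because each instance attaches to the same cnode1 target, the per-workload latency tables reported a little further down (write, read, flush, unmap against Nvme1n1) come out of these four one-second runs.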
00:13:52.438 [2024-07-15 13:44:18.825019] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:52.438 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.697 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.697 [2024-07-15 13:44:19.026808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.697 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.697 [2024-07-15 13:44:19.109275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:52.697 [2024-07-15 13:44:19.122856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.697 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.697 [2024-07-15 13:44:19.204927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:52.697 [2024-07-15 13:44:19.221047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.955 [2024-07-15 13:44:19.283706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.955 [2024-07-15 13:44:19.314333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:52.955 [2024-07-15 13:44:19.364735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:52.955 Running I/O for 1 seconds... 00:13:52.955 Running I/O for 1 seconds... 00:13:52.955 Running I/O for 1 seconds... 00:13:53.214 Running I/O for 1 seconds... 00:13:54.151 00:13:54.151 Latency(us) 00:13:54.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.151 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:54.151 Nvme1n1 : 1.01 17689.64 69.10 0.00 0.00 7214.41 4302.58 12594.31 00:13:54.151 =================================================================================================================== 00:13:54.151 Total : 17689.64 69.10 0.00 0.00 7214.41 4302.58 12594.31 00:13:54.151 00:13:54.151 Latency(us) 00:13:54.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.151 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:54.151 Nvme1n1 : 1.01 14183.28 55.40 0.00 0.00 8994.79 5698.78 18464.06 00:13:54.151 =================================================================================================================== 00:13:54.151 Total : 14183.28 55.40 0.00 0.00 8994.79 5698.78 18464.06 00:13:54.151 00:13:54.151 Latency(us) 00:13:54.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.152 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:54.152 Nvme1n1 : 1.00 252161.97 985.01 0.00 0.00 506.02 206.58 1980.33 00:13:54.152 =================================================================================================================== 00:13:54.152 Total : 252161.97 985.01 0.00 0.00 506.02 206.58 1980.33 00:13:54.152 00:13:54.152 Latency(us) 00:13:54.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.152 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:54.152 Nvme1n1 : 1.00 18396.66 71.86 0.00 0.00 6942.54 3447.76 18350.08 00:13:54.152 =================================================================================================================== 00:13:54.152 Total : 18396.66 71.86 0.00 0.00 6942.54 3447.76 18350.08 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2457742 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2457744 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2457747 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:54.411 rmmod nvme_rdma 00:13:54.411 rmmod nvme_fabrics 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2457537 ']' 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2457537 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2457537 ']' 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2457537 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:54.411 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2457537 00:13:54.670 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:54.670 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:54.670 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2457537' 00:13:54.670 killing process with pid 2457537 00:13:54.670 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2457537 00:13:54.670 13:44:20 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2457537 00:13:54.929 13:44:21 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.929 13:44:21 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:54.929 00:13:54.929 real 0m10.688s 00:13:54.929 user 0m21.435s 00:13:54.929 sys 0m6.717s 00:13:54.929 13:44:21 nvmf_rdma.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:13:54.929 13:44:21 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:54.929 ************************************ 00:13:54.929 END TEST nvmf_bdev_io_wait 00:13:54.929 ************************************ 00:13:54.929 13:44:21 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:54.929 13:44:21 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:54.929 13:44:21 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:54.929 13:44:21 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:54.929 13:44:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:54.929 ************************************ 00:13:54.929 START TEST nvmf_queue_depth 00:13:54.929 ************************************ 00:13:54.929 13:44:21 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:54.929 * Looking for test storage... 00:13:54.929 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:54.929 13:44:21 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.929 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:54.929 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.189 13:44:21 
nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.189 13:44:21 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:01.758 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:01.758 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:01.759 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.759 13:44:28 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:01.759 Found net devices under 0000:18:00.0: mlx_0_0 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:01.759 Found net devices under 0000:18:00.1: mlx_0_1 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:01.759 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:01.759 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:01.759 altname enp24s0f0np0 00:14:01.759 altname ens785f0np0 00:14:01.759 inet 192.168.100.8/24 scope global mlx_0_0 00:14:01.759 valid_lft forever preferred_lft forever 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:01.759 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:01.759 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:01.759 altname enp24s0f1np1 00:14:01.759 altname ens785f1np1 00:14:01.759 inet 192.168.100.9/24 scope global mlx_0_1 00:14:01.759 valid_lft forever preferred_lft forever 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:01.759 13:44:28 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:01.759 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:02.019 192.168.100.9' 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:02.019 192.168.100.9' 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:02.019 192.168.100.9' 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2461031 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2461031 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2461031 ']' 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.019 13:44:28 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:02.019 [2024-07-15 13:44:28.397552] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
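Before any NVMe-oF setup, the nvmftestinit trace above resolves each Mellanox port to its IPv4 address with a short ip/awk/cut pipeline (get_ip_address in nvmf/common.sh). A minimal standalone sketch of that step, using only the interface names and addresses this rig reports:

  # get_ip_address as exercised in the trace: first IPv4 address of a netdev, prefix length stripped.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
  get_ip_address mlx_0_1   # -> 192.168.100.9 on this node
  # The trace then collects these into RDMA_IP_LIST and derives NVMF_FIRST_TARGET_IP /
  # NVMF_SECOND_TARGET_IP from its first and second lines (head -n 1 / tail -n +2).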
00:14:02.019 [2024-07-15 13:44:28.397620] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.019 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.019 [2024-07-15 13:44:28.479869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.277 [2024-07-15 13:44:28.567424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.277 [2024-07-15 13:44:28.567464] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.277 [2024-07-15 13:44:28.567474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.277 [2024-07-15 13:44:28.567499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.277 [2024-07-15 13:44:28.567506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.277 [2024-07-15 13:44:28.567531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:02.845 [2024-07-15 13:44:29.283625] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8b0260/0x8b4750) succeed. 00:14:02.845 [2024-07-15 13:44:29.292637] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8b1760/0x8f5de0) succeed. 
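The target itself came up just before the create_ib_device notices: nvmfappstart launches nvmf_tgt on core mask 0x2, waitforlisten blocks until the app answers on /var/tmp/spdk.sock, and the first RPC creates the RDMA transport. Condensed into a hedged sketch (binary path and option values copied from the trace; rpc_cmd is the common.sh wrapper around scripts/rpc.py):

  # Start the NVMe-oF target and create the RDMA transport, as traced for nvmf_queue_depth.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # autotest helper: poll until /var/tmp/spdk.sock accepts RPCs
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192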
00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.845 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:03.104 Malloc0 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:03.104 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:03.105 [2024-07-15 13:44:29.406389] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2461110 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2461110 /var/tmp/bdevperf.sock 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2461110 ']' 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.105 13:44:29 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:03.105 [2024-07-15 13:44:29.454711] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:03.105 [2024-07-15 13:44:29.454765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2461110 ] 00:14:03.105 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.105 [2024-07-15 13:44:29.539305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.105 [2024-07-15 13:44:29.630399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.042 13:44:30 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.042 13:44:30 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:04.042 13:44:30 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:04.042 13:44:30 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.042 13:44:30 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:04.042 NVMe0n1 00:14:04.042 13:44:30 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.042 13:44:30 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:04.042 Running I/O for 10 seconds... 
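Everything the queue-depth test configured between 13:44:29 and the start of the I/O run reduces to four RPCs against the target plus a bdevperf instance on its own RPC socket. The sketch below repeats the exact values from the trace; paths are shown relative to the spdk checkout printed in the log, and rpc_cmd again stands for scripts/rpc.py:

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # bdevperf runs as a second SPDK app: queue depth 1024, 4096-byte verify I/O for 10 seconds.
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests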
00:14:14.021 00:14:14.021 Latency(us) 00:14:14.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.021 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:14.021 Verification LBA range: start 0x0 length 0x4000 00:14:14.021 NVMe0n1 : 10.04 17742.13 69.31 0.00 0.00 57571.95 22111.28 36472.21 00:14:14.021 =================================================================================================================== 00:14:14.021 Total : 17742.13 69.31 0.00 0.00 57571.95 22111.28 36472.21 00:14:14.021 0 00:14:14.021 13:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2461110 00:14:14.021 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2461110 ']' 00:14:14.021 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2461110 00:14:14.279 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:14.279 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:14.279 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2461110 00:14:14.279 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:14.279 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:14.279 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2461110' 00:14:14.279 killing process with pid 2461110 00:14:14.279 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2461110 00:14:14.279 Received shutdown signal, test time was about 10.000000 seconds 00:14:14.279 00:14:14.279 Latency(us) 00:14:14.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.279 =================================================================================================================== 00:14:14.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:14.279 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2461110 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:14.538 rmmod nvme_rdma 00:14:14.538 rmmod nvme_fabrics 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2461031 ']' 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2461031 
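Teardown follows the same killprocess pattern for both the bdevperf instance (pid 2461110) and the target (pid 2461031): confirm the pid is still alive, make sure it is not the sudo wrapper, then kill and reap it. A condensed, approximate sketch of that pattern as it appears in the trace (the real helper in autotest_common.sh carries extra branches, e.g. for non-Linux hosts and sudo-spawned processes):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                      # error out if the process already exited
      [[ "$(ps --no-headers -o comm= "$pid")" != sudo ]]  # never signal the sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }
  killprocess "$bdevperf_pid"   # 2461110 in this run
  killprocess "$nvmfpid"        # 2461031 in this run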
00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2461031 ']' 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2461031 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2461031 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2461031' 00:14:14.538 killing process with pid 2461031 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2461031 00:14:14.538 13:44:40 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2461031 00:14:14.797 13:44:41 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:14.797 13:44:41 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:14.797 00:14:14.797 real 0m19.850s 00:14:14.797 user 0m26.290s 00:14:14.797 sys 0m6.040s 00:14:14.797 13:44:41 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:14.797 13:44:41 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:14.797 ************************************ 00:14:14.797 END TEST nvmf_queue_depth 00:14:14.797 ************************************ 00:14:14.797 13:44:41 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:14.797 13:44:41 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:14.797 13:44:41 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:14.797 13:44:41 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.797 13:44:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:14.797 ************************************ 00:14:14.797 START TEST nvmf_target_multipath 00:14:14.797 ************************************ 00:14:14.797 13:44:41 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:15.056 * Looking for test storage... 
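Each suite in this job is driven the same way: nvmf.sh calls run_test with the suite name and script, and run_test prints the START/END banners and the real/user/sys timing shown above. Illustrative invocation only ($rootdir stands in for the /var/jenkins/workspace/nvmf-phy-autotest/spdk checkout printed in the trace):

  run_test nvmf_queue_depth      "$rootdir/test/nvmf/target/queue_depth.sh" --transport=rdma
  run_test nvmf_target_multipath "$rootdir/test/nvmf/target/multipath.sh"   --transport=rdma
  run_test nvmf_zcopy            "$rootdir/test/nvmf/target/zcopy.sh"       --transport=rdma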
00:14:15.056 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.056 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
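multipath.sh opens with the same boilerplate queue_depth.sh used at the top of this section: source the shared nvmf helpers, set the malloc bdev geometry and the rpc path, then call nvmftestinit. A sketch of that skeleton with the values from the trace ($rootdir again stands in for the spdk checkout path printed above):

  source "$rootdir/test/nvmf/common.sh"
  MALLOC_BDEV_SIZE=64
  MALLOC_BLOCK_SIZE=512
  nqn=nqn.2016-06.io.spdk:cnode1
  rpc_py="$rootdir/scripts/rpc.py"
  nvmftestinit   # loads the ib_*/rdma_* modules, resolves 192.168.100.8/9 and, per
                 # common.sh@446 above, installs 'trap nvmftestfini SIGINT SIGTERM EXIT'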
00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.057 13:44:41 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.627 13:44:47 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:21.627 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:21.627 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:21.627 Found net devices under 0000:18:00.0: mlx_0_0 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:21.627 Found net devices under 0000:18:00.1: mlx_0_1 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:21.627 13:44:47 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:21.627 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:21.628 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:21.628 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:21.628 altname enp24s0f0np0 00:14:21.628 altname ens785f0np0 00:14:21.628 inet 192.168.100.8/24 scope global mlx_0_0 00:14:21.628 valid_lft forever preferred_lft forever 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:21.628 13:44:48 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:21.628 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:21.628 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:21.628 altname enp24s0f1np1 00:14:21.628 altname ens785f1np1 00:14:21.628 inet 192.168.100.9/24 scope global mlx_0_1 00:14:21.628 valid_lft forever preferred_lft forever 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:21.628 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:21.888 192.168.100.9' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:21.888 192.168.100.9' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:21.888 192.168.100.9' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:14:21.888 run this test only with TCP transport for now 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:21.888 rmmod nvme_rdma 00:14:21.888 rmmod nvme_fabrics 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:21.888 
13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:21.888 00:14:21.888 real 0m7.040s 00:14:21.888 user 0m1.958s 00:14:21.888 sys 0m5.282s 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:21.888 13:44:48 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:21.888 ************************************ 00:14:21.888 END TEST nvmf_target_multipath 00:14:21.888 ************************************ 00:14:21.888 13:44:48 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:21.888 13:44:48 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:14:21.888 13:44:48 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:21.888 13:44:48 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.888 13:44:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:21.888 ************************************ 00:14:21.888 START TEST nvmf_zcopy 00:14:21.888 ************************************ 00:14:21.888 13:44:48 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:14:22.148 * Looking for test storage... 
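The multipath suite never reaches its I/O phase on this rig: right after nvmftestinit it checks the transport, prints 'run this test only with TCP transport for now', tears the environment down and exits 0, which is why its END TEST banner arrives after only a few seconds. The guard, roughly as traced at multipath.sh@51-54 (the variable name is illustrative; xtrace only shows the already-expanded value rdma):

  if [ "$TEST_TRANSPORT" != tcp ]; then   # expands to '[' rdma '!=' tcp ']' in this run
      echo 'run this test only with TCP transport for now'
      nvmftestfini
      exit 0
  fi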
00:14:22.148 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.148 13:44:48 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:28.805 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:28.805 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:28.805 Found net devices under 0000:18:00.0: mlx_0_0 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:28.805 Found net devices under 0000:18:00.1: mlx_0_1 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:28.805 13:44:55 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:28.805 13:44:55 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:28.805 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:28.805 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:28.805 altname enp24s0f0np0 00:14:28.805 altname ens785f0np0 00:14:28.805 inet 192.168.100.8/24 scope global mlx_0_0 00:14:28.805 valid_lft forever preferred_lft forever 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:28.805 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:28.806 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:28.806 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:28.806 altname enp24s0f1np1 00:14:28.806 altname ens785f1np1 00:14:28.806 inet 192.168.100.9/24 scope global mlx_0_1 00:14:28.806 valid_lft forever preferred_lft forever 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:28.806 192.168.100.9' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:28.806 192.168.100.9' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:28.806 192.168.100.9' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:28.806 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2468478 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2468478 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2468478 ']' 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.066 13:44:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:29.066 [2024-07-15 13:44:55.413503] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:29.066 [2024-07-15 13:44:55.413579] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.066 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.066 [2024-07-15 13:44:55.501268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.066 [2024-07-15 13:44:55.589996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.066 [2024-07-15 13:44:55.590039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.066 [2024-07-15 13:44:55.590049] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.066 [2024-07-15 13:44:55.590058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.066 [2024-07-15 13:44:55.590066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
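For readers reconstructing the trace above: the address discovery performed by nvmf/common.sh (the @456-@458 steps in the trace) reduces to the shell sketch below. This is a condensed illustration rather than the full script; the helper name get_ip_address, the variable names, and the mlx_0_0/mlx_0_1 interface names are all taken from the trace itself (the interfaces come from get_rdma_if_list).

# Return the first IPv4 address configured on an interface.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

# Collect one IP per RDMA-capable netdev, e.g. 192.168.100.8 and 192.168.100.9.
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"

# First line becomes the primary target IP, second line the secondary one.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

With rdma as the transport, the script then sets NVMF_TRANSPORT_OPTS to '-t rdma --num-shared-buffers 1024' and modprobes nvme-rdma, exactly as the @463 and @474 lines in the trace show.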
00:14:29.066 [2024-07-15 13:44:55.590088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.001 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.001 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:30.001 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:14:30.002 Unsupported transport: rdma 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # type=--id 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # id=0 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:30.002 nvmf_trace.0 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@821 -- # return 0 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:30.002 rmmod nvme_rdma 00:14:30.002 rmmod nvme_fabrics 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2468478 ']' 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2468478 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2468478 ']' 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2468478 00:14:30.002 13:44:56 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2468478 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2468478' 00:14:30.002 killing process with pid 2468478 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2468478 00:14:30.002 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2468478 00:14:30.260 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.260 13:44:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:30.260 00:14:30.260 real 0m8.187s 00:14:30.260 user 0m3.332s 00:14:30.260 sys 0m5.619s 00:14:30.260 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.260 13:44:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:30.260 ************************************ 00:14:30.260 END TEST nvmf_zcopy 00:14:30.260 ************************************ 00:14:30.260 13:44:56 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:30.260 13:44:56 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:14:30.260 13:44:56 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:30.260 13:44:56 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.260 13:44:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:30.260 ************************************ 00:14:30.260 START TEST nvmf_nmic 00:14:30.260 ************************************ 00:14:30.260 13:44:56 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:14:30.260 * Looking for test storage... 
00:14:30.260 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:30.260 13:44:56 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.519 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:30.519 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.519 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.519 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.519 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.520 
13:44:56 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.520 13:44:56 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:37.087 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:37.087 13:45:03 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:37.087 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:37.087 Found net devices under 0000:18:00.0: mlx_0_0 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:37.087 Found net devices under 0000:18:00.1: mlx_0_1 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:37.087 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:37.088 13:45:03 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:37.088 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:14:37.088 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:37.088 altname enp24s0f0np0 00:14:37.088 altname ens785f0np0 00:14:37.088 inet 192.168.100.8/24 scope global mlx_0_0 00:14:37.088 valid_lft forever preferred_lft forever 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:37.088 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:37.347 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:37.347 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:37.347 altname enp24s0f1np1 00:14:37.347 altname ens785f1np1 00:14:37.347 inet 192.168.100.9/24 scope global mlx_0_1 00:14:37.347 valid_lft forever preferred_lft forever 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:37.347 192.168.100.9' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:37.347 192.168.100.9' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:37.347 192.168.100.9' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:37.347 13:45:03 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2471701 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2471701 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2471701 ']' 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.348 13:45:03 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:37.348 [2024-07-15 13:45:03.790973] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:14:37.348 [2024-07-15 13:45:03.791033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.348 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.606 [2024-07-15 13:45:03.877461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:37.606 [2024-07-15 13:45:03.964231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.606 [2024-07-15 13:45:03.964275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.606 [2024-07-15 13:45:03.964285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.606 [2024-07-15 13:45:03.964294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.606 [2024-07-15 13:45:03.964301] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.606 [2024-07-15 13:45:03.964367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.606 [2024-07-15 13:45:03.964493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.606 [2024-07-15 13:45:03.964624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.606 [2024-07-15 13:45:03.964624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.174 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.174 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:14:38.174 13:45:04 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.174 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.174 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:38.174 13:45:04 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.174 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:38.174 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.174 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:38.174 [2024-07-15 13:45:04.687368] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa69180/0xa6d670) succeed. 00:14:38.174 [2024-07-15 13:45:04.696850] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa6a7c0/0xaaed00) succeed. 
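For reference, the target-side bring-up that nvmfappstart and rpc_cmd performed above can be reproduced by hand. A minimal sketch, assuming the SPDK build and script paths used in this run and that the mlx5 ports already carry the 192.168.100.0/24 addresses shown earlier; the harness's waitforlisten helper is replaced here by a crude sleep:

# start the NVMe-oF target with the same flags the harness used
sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
sleep 2   # the harness instead waits for /var/tmp/spdk.sock with waitforlisten

# create the RDMA transport exactly as target/nmic.sh@17 did above
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sudo $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The subsystem, namespace and listener RPCs that follow in the trace are issued against the same socket in the same way.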
00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:38.434 Malloc0 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:38.434 [2024-07-15 13:45:04.873531] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:38.434 test case1: single bdev can't be used in multiple subsystems 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.434 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:38.434 [2024-07-15 13:45:04.897270] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:38.434 [2024-07-15 
13:45:04.897294] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:38.434 [2024-07-15 13:45:04.897305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.434 request: 00:14:38.434 { 00:14:38.434 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:38.434 "namespace": { 00:14:38.434 "bdev_name": "Malloc0", 00:14:38.434 "no_auto_visible": false 00:14:38.434 }, 00:14:38.434 "method": "nvmf_subsystem_add_ns", 00:14:38.434 "req_id": 1 00:14:38.434 } 00:14:38.434 Got JSON-RPC error response 00:14:38.434 response: 00:14:38.434 { 00:14:38.434 "code": -32602, 00:14:38.435 "message": "Invalid parameters" 00:14:38.435 } 00:14:38.435 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:38.435 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:38.435 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:38.435 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:38.435 Adding namespace failed - expected result. 00:14:38.435 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:38.435 test case2: host connect to nvmf target in multiple paths 00:14:38.435 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:14:38.435 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.435 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:38.435 [2024-07-15 13:45:04.913337] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:14:38.435 13:45:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.435 13:45:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:39.810 13:45:05 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:14:40.376 13:45:06 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:40.376 13:45:06 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:40.376 13:45:06 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.376 13:45:06 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:40.376 13:45:06 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:42.907 13:45:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:42.907 13:45:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:42.907 13:45:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.907 13:45:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:42.907 13:45:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.907 13:45:08 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:42.907 13:45:08 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:42.907 [global] 00:14:42.907 thread=1 00:14:42.907 invalidate=1 00:14:42.908 rw=write 00:14:42.908 time_based=1 00:14:42.908 runtime=1 00:14:42.908 ioengine=libaio 00:14:42.908 direct=1 00:14:42.908 bs=4096 00:14:42.908 iodepth=1 00:14:42.908 norandommap=0 00:14:42.908 numjobs=1 00:14:42.908 00:14:42.908 verify_dump=1 00:14:42.908 verify_backlog=512 00:14:42.908 verify_state_save=0 00:14:42.908 do_verify=1 00:14:42.908 verify=crc32c-intel 00:14:42.908 [job0] 00:14:42.908 filename=/dev/nvme0n1 00:14:42.908 Could not set queue depth (nvme0n1) 00:14:42.908 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:42.908 fio-3.35 00:14:42.908 Starting 1 thread 00:14:43.842 00:14:43.842 job0: (groupid=0, jobs=1): err= 0: pid=2472932: Mon Jul 15 13:45:10 2024 00:14:43.842 read: IOPS=6823, BW=26.7MiB/s (27.9MB/s)(26.7MiB/1001msec) 00:14:43.842 slat (nsec): min=8273, max=38576, avg=9101.30, stdev=1002.50 00:14:43.842 clat (usec): min=46, max=169, avg=60.57, stdev= 4.25 00:14:43.842 lat (usec): min=60, max=179, avg=69.67, stdev= 4.44 00:14:43.842 clat percentiles (usec): 00:14:43.842 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 58], 00:14:43.842 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:14:43.842 | 70.00th=[ 63], 80.00th=[ 64], 90.00th=[ 66], 95.00th=[ 68], 00:14:43.842 | 99.00th=[ 73], 99.50th=[ 76], 99.90th=[ 84], 99.95th=[ 111], 00:14:43.842 | 99.99th=[ 169] 00:14:43.842 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:14:43.842 slat (nsec): min=8845, max=54235, avg=11129.23, stdev=1551.60 00:14:43.842 clat (usec): min=42, max=629, avg=57.96, stdev=11.11 00:14:43.842 lat (usec): min=58, max=660, avg=69.09, stdev=11.49 00:14:43.842 clat percentiles (usec): 00:14:43.842 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:14:43.842 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:14:43.842 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 63], 95.00th=[ 65], 00:14:43.842 | 99.00th=[ 71], 99.50th=[ 75], 99.90th=[ 103], 99.95th=[ 192], 00:14:43.842 | 99.99th=[ 627] 00:14:43.842 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:14:43.842 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:14:43.842 lat (usec) : 50=0.29%, 100=99.62%, 250=0.06%, 500=0.01%, 750=0.01% 00:14:43.842 cpu : usr=8.60%, sys=15.90%, ctx=13998, majf=0, minf=1 00:14:43.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:43.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.842 issued rwts: total=6830,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:43.842 00:14:43.842 Run status group 0 (all jobs): 00:14:43.843 READ: bw=26.7MiB/s (27.9MB/s), 26.7MiB/s-26.7MiB/s (27.9MB/s-27.9MB/s), io=26.7MiB (28.0MB), run=1001-1001msec 00:14:43.843 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:14:43.843 00:14:43.843 Disk stats (read/write): 00:14:43.843 nvme0n1: ios=6194/6411, merge=0/0, ticks=340/329, in_queue=669, util=90.68% 
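The [global]/[job0] listing above is the complete job description the fio-wrapper handed to fio for this run. As a usage sketch, the same one-second verified write workload can be launched directly, assuming fio is installed and the connected namespace still appears as /dev/nvme0n1 (the file name nmic-write.fio is arbitrary):

cat > nmic-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
sudo fio nmic-write.fio   # same parameters the wrapper generated above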
00:14:43.843 13:45:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:45.773 13:45:12 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.773 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:45.773 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:45.773 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.773 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:45.773 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:46.032 rmmod nvme_rdma 00:14:46.032 rmmod nvme_fabrics 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2471701 ']' 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2471701 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2471701 ']' 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2471701 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2471701 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2471701' 00:14:46.032 killing process with pid 2471701 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2471701 00:14:46.032 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2471701 00:14:46.291 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.291 13:45:12 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:46.291 00:14:46.291 real 0m16.048s 00:14:46.291 user 0m39.358s 00:14:46.291 sys 0m6.296s 00:14:46.291 13:45:12 nvmf_rdma.nvmf_nmic -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.291 13:45:12 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:46.291 ************************************ 00:14:46.291 END TEST nvmf_nmic 00:14:46.291 ************************************ 00:14:46.291 13:45:12 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:46.291 13:45:12 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:46.291 13:45:12 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:46.291 13:45:12 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.292 13:45:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:46.292 ************************************ 00:14:46.292 START TEST nvmf_fio_target 00:14:46.292 ************************************ 00:14:46.292 13:45:12 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:46.551 * Looking for test storage... 00:14:46.551 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.551 13:45:12 
nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:46.551 13:45:12 
nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:46.551 13:45:12 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:53.120 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:53.121 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:53.121 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:53.121 Found net devices under 0000:18:00.0: mlx_0_0 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:53.121 Found net devices under 0000:18:00.1: mlx_0_1 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:53.121 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:53.121 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:14:53.121 altname enp24s0f0np0 00:14:53.121 altname ens785f0np0 00:14:53.121 inet 192.168.100.8/24 scope global mlx_0_0 00:14:53.121 valid_lft forever preferred_lft forever 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:53.121 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:53.121 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:14:53.121 altname enp24s0f1np1 
00:14:53.121 altname ens785f1np1 00:14:53.121 inet 192.168.100.9/24 scope global mlx_0_1 00:14:53.121 valid_lft forever preferred_lft forever 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:53.121 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:53.380 192.168.100.9' 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:53.380 192.168.100.9' 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:53.380 192.168.100.9' 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2476238 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2476238 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2476238 ']' 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.380 13:45:19 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.380 [2024-07-15 13:45:19.807862] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:14:53.380 [2024-07-15 13:45:19.807926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.380 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.380 [2024-07-15 13:45:19.895812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.638 [2024-07-15 13:45:19.996169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.638 [2024-07-15 13:45:19.996209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.638 [2024-07-15 13:45:19.996220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.638 [2024-07-15 13:45:19.996229] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.638 [2024-07-15 13:45:19.996237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.638 [2024-07-15 13:45:19.996312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.638 [2024-07-15 13:45:19.996344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.638 [2024-07-15 13:45:19.996459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.638 [2024-07-15 13:45:19.996460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.204 13:45:20 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.204 13:45:20 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:14:54.204 13:45:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.204 13:45:20 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.204 13:45:20 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.204 13:45:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.204 13:45:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:54.462 [2024-07-15 13:45:20.845188] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b5b180/0x1b5f670) succeed. 00:14:54.462 [2024-07-15 13:45:20.854752] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b5c7c0/0x1ba0d00) succeed. 
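The block of nvmf/common.sh xtrace above is interface discovery: find the netdevs under the two mlx5 PCI functions and read their IPv4 addresses before starting the target. Condensed into a readable sketch (interface names and addresses are the ones observed in this run; the real script collects the addresses into RDMA_IP_LIST and splits it with head/tail rather than calling the helper twice):

get_ip_address() {
    # first IPv4 address on the given interface, prefix length stripped
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'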
00:14:54.721 13:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.721 13:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:54.721 13:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.980 13:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:54.980 13:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:55.237 13:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:55.237 13:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:55.496 13:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:55.496 13:45:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:55.754 13:45:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:55.754 13:45:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:55.754 13:45:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.013 13:45:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:56.013 13:45:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.271 13:45:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:56.271 13:45:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:56.528 13:45:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:56.528 13:45:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:56.528 13:45:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:56.789 13:45:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:56.789 13:45:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:57.047 13:45:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:57.047 [2024-07-15 13:45:23.545495] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:57.305 13:45:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:14:57.305 13:45:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:57.564 13:45:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:58.592 13:45:24 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:58.592 13:45:24 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:58.592 13:45:24 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.592 13:45:24 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:58.592 13:45:24 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:58.592 13:45:24 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:00.495 13:45:26 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:00.495 13:45:26 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:00.495 13:45:26 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.495 13:45:26 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:00.495 13:45:26 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.495 13:45:26 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:00.495 13:45:26 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:00.495 [global] 00:15:00.495 thread=1 00:15:00.495 invalidate=1 00:15:00.495 rw=write 00:15:00.495 time_based=1 00:15:00.495 runtime=1 00:15:00.495 ioengine=libaio 00:15:00.495 direct=1 00:15:00.495 bs=4096 00:15:00.495 iodepth=1 00:15:00.495 norandommap=0 00:15:00.495 numjobs=1 00:15:00.495 00:15:00.495 verify_dump=1 00:15:00.495 verify_backlog=512 00:15:00.495 verify_state_save=0 00:15:00.495 do_verify=1 00:15:00.495 verify=crc32c-intel 00:15:00.495 [job0] 00:15:00.495 filename=/dev/nvme0n1 00:15:00.495 [job1] 00:15:00.495 filename=/dev/nvme0n2 00:15:00.495 [job2] 00:15:00.495 filename=/dev/nvme0n3 00:15:00.495 [job3] 00:15:00.495 filename=/dev/nvme0n4 00:15:00.752 Could not set queue depth (nvme0n1) 00:15:00.752 Could not set queue depth (nvme0n2) 00:15:00.752 Could not set queue depth (nvme0n3) 00:15:00.752 Could not set queue depth (nvme0n4) 00:15:01.009 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:01.009 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:01.009 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:01.009 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:01.009 fio-3.35 00:15:01.009 Starting 4 threads 00:15:02.381 00:15:02.381 job0: (groupid=0, jobs=1): err= 0: pid=2477475: Mon Jul 15 13:45:28 2024 00:15:02.381 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:15:02.381 slat 
(nsec): min=8272, max=31253, avg=9282.13, stdev=1154.07 00:15:02.381 clat (usec): min=73, max=238, avg=153.41, stdev=25.00 00:15:02.381 lat (usec): min=82, max=249, avg=162.69, stdev=25.18 00:15:02.381 clat percentiles (usec): 00:15:02.381 | 1.00th=[ 91], 5.00th=[ 115], 10.00th=[ 126], 20.00th=[ 133], 00:15:02.381 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 155], 60.00th=[ 165], 00:15:02.381 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 190], 00:15:02.381 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 233], 99.95th=[ 239], 00:15:02.381 | 99.99th=[ 239] 00:15:02.381 write: IOPS=3089, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec); 0 zone resets 00:15:02.381 slat (nsec): min=10499, max=44975, avg=11563.75, stdev=1481.16 00:15:02.381 clat (usec): min=65, max=223, avg=145.79, stdev=23.96 00:15:02.381 lat (usec): min=77, max=234, avg=157.36, stdev=24.13 00:15:02.381 clat percentiles (usec): 00:15:02.381 | 1.00th=[ 82], 5.00th=[ 106], 10.00th=[ 117], 20.00th=[ 125], 00:15:02.381 | 30.00th=[ 133], 40.00th=[ 141], 50.00th=[ 151], 60.00th=[ 157], 00:15:02.381 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 178], 00:15:02.381 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 219], 99.95th=[ 221], 00:15:02.381 | 99.99th=[ 223] 00:15:02.381 bw ( KiB/s): min=12288, max=12288, per=19.99%, avg=12288.00, stdev= 0.00, samples=1 00:15:02.381 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:02.381 lat (usec) : 100=3.47%, 250=96.53% 00:15:02.381 cpu : usr=4.20%, sys=6.30%, ctx=6165, majf=0, minf=1 00:15:02.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:02.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.381 issued rwts: total=3072,3093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:02.381 job1: (groupid=0, jobs=1): err= 0: pid=2477476: Mon Jul 15 13:45:28 2024 00:15:02.381 read: IOPS=5541, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1001msec) 00:15:02.381 slat (nsec): min=8314, max=30468, avg=9098.71, stdev=1085.60 00:15:02.381 clat (usec): min=57, max=167, avg=79.95, stdev=10.56 00:15:02.381 lat (usec): min=71, max=175, avg=89.05, stdev=10.62 00:15:02.381 clat percentiles (usec): 00:15:02.381 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:15:02.381 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 80], 00:15:02.381 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 106], 00:15:02.381 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 135], 99.95th=[ 139], 00:15:02.381 | 99.99th=[ 167] 00:15:02.381 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:15:02.381 slat (nsec): min=10395, max=43522, avg=11265.52, stdev=1505.42 00:15:02.381 clat (usec): min=60, max=170, avg=74.34, stdev= 6.27 00:15:02.381 lat (usec): min=71, max=181, avg=85.61, stdev= 6.48 00:15:02.381 clat percentiles (usec): 00:15:02.381 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:15:02.381 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 76], 00:15:02.381 | 70.00th=[ 77], 80.00th=[ 79], 90.00th=[ 81], 95.00th=[ 84], 00:15:02.381 | 99.00th=[ 99], 99.50th=[ 110], 99.90th=[ 120], 99.95th=[ 147], 00:15:02.381 | 99.99th=[ 172] 00:15:02.381 bw ( KiB/s): min=24576, max=24576, per=39.99%, avg=24576.00, stdev= 0.00, samples=1 00:15:02.381 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:15:02.381 lat (usec) : 100=96.83%, 
250=3.17% 00:15:02.381 cpu : usr=7.30%, sys=11.30%, ctx=11179, majf=0, minf=1 00:15:02.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:02.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.381 issued rwts: total=5547,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:02.381 job2: (groupid=0, jobs=1): err= 0: pid=2477477: Mon Jul 15 13:45:28 2024 00:15:02.381 read: IOPS=3289, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1001msec) 00:15:02.381 slat (nsec): min=8426, max=31743, avg=9630.68, stdev=1227.53 00:15:02.381 clat (usec): min=78, max=242, avg=134.99, stdev=38.79 00:15:02.381 lat (usec): min=87, max=253, avg=144.62, stdev=39.10 00:15:02.381 clat percentiles (usec): 00:15:02.381 | 1.00th=[ 86], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:15:02.381 | 30.00th=[ 100], 40.00th=[ 105], 50.00th=[ 122], 60.00th=[ 161], 00:15:02.381 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 190], 00:15:02.381 | 99.00th=[ 223], 99.50th=[ 227], 99.90th=[ 241], 99.95th=[ 241], 00:15:02.381 | 99.99th=[ 243] 00:15:02.382 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:02.382 slat (nsec): min=10530, max=39956, avg=11862.74, stdev=1555.79 00:15:02.382 clat (usec): min=71, max=238, avg=130.01, stdev=36.11 00:15:02.382 lat (usec): min=82, max=251, avg=141.87, stdev=36.57 00:15:02.382 clat percentiles (usec): 00:15:02.382 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 91], 00:15:02.382 | 30.00th=[ 95], 40.00th=[ 104], 50.00th=[ 139], 60.00th=[ 153], 00:15:02.382 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 180], 00:15:02.382 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 229], 99.95th=[ 235], 00:15:02.382 | 99.99th=[ 239] 00:15:02.382 bw ( KiB/s): min=12288, max=12288, per=19.99%, avg=12288.00, stdev= 0.00, samples=1 00:15:02.382 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:02.382 lat (usec) : 100=34.14%, 250=65.86% 00:15:02.382 cpu : usr=3.90%, sys=8.20%, ctx=6877, majf=0, minf=1 00:15:02.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:02.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.382 issued rwts: total=3293,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.382 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:02.382 job3: (groupid=0, jobs=1): err= 0: pid=2477478: Mon Jul 15 13:45:28 2024 00:15:02.382 read: IOPS=2967, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec) 00:15:02.382 slat (nsec): min=8594, max=62729, avg=9918.18, stdev=1472.70 00:15:02.382 clat (usec): min=80, max=246, avg=155.47, stdev=25.73 00:15:02.382 lat (usec): min=89, max=257, avg=165.38, stdev=26.12 00:15:02.382 clat percentiles (usec): 00:15:02.382 | 1.00th=[ 99], 5.00th=[ 121], 10.00th=[ 127], 20.00th=[ 133], 00:15:02.382 | 30.00th=[ 137], 40.00th=[ 145], 50.00th=[ 159], 60.00th=[ 165], 00:15:02.382 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 194], 00:15:02.382 | 99.00th=[ 231], 99.50th=[ 239], 99.90th=[ 245], 99.95th=[ 247], 00:15:02.382 | 99.99th=[ 247] 00:15:02.382 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:02.382 slat (nsec): min=10792, max=42734, avg=12255.88, stdev=1380.69 00:15:02.382 clat (usec): min=70, max=235, 
avg=148.28, stdev=24.43 00:15:02.382 lat (usec): min=82, max=248, avg=160.54, stdev=24.77 00:15:02.382 clat percentiles (usec): 00:15:02.382 | 1.00th=[ 91], 5.00th=[ 113], 10.00th=[ 120], 20.00th=[ 127], 00:15:02.382 | 30.00th=[ 133], 40.00th=[ 141], 50.00th=[ 153], 60.00th=[ 157], 00:15:02.382 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 188], 00:15:02.382 | 99.00th=[ 217], 99.50th=[ 221], 99.90th=[ 229], 99.95th=[ 233], 00:15:02.382 | 99.99th=[ 235] 00:15:02.382 bw ( KiB/s): min=12288, max=12288, per=19.99%, avg=12288.00, stdev= 0.00, samples=1 00:15:02.382 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:02.382 lat (usec) : 100=1.92%, 250=98.08% 00:15:02.382 cpu : usr=3.70%, sys=7.30%, ctx=6044, majf=0, minf=1 00:15:02.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:02.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.382 issued rwts: total=2970,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.382 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:02.382 00:15:02.382 Run status group 0 (all jobs): 00:15:02.382 READ: bw=58.1MiB/s (60.9MB/s), 11.6MiB/s-21.6MiB/s (12.2MB/s-22.7MB/s), io=58.1MiB (61.0MB), run=1001-1001msec 00:15:02.382 WRITE: bw=60.0MiB/s (62.9MB/s), 12.0MiB/s-22.0MiB/s (12.6MB/s-23.0MB/s), io=60.1MiB (63.0MB), run=1001-1001msec 00:15:02.382 00:15:02.382 Disk stats (read/write): 00:15:02.382 nvme0n1: ios=2453/2560, merge=0/0, ticks=382/361, in_queue=743, util=83.87% 00:15:02.382 nvme0n2: ios=4608/4659, merge=0/0, ticks=336/317, in_queue=653, util=84.87% 00:15:02.382 nvme0n3: ios=2560/2716, merge=0/0, ticks=354/369, in_queue=723, util=88.20% 00:15:02.382 nvme0n4: ios=2281/2560, merge=0/0, ticks=369/373, in_queue=742, util=89.44% 00:15:02.382 13:45:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:02.382 [global] 00:15:02.382 thread=1 00:15:02.382 invalidate=1 00:15:02.382 rw=randwrite 00:15:02.382 time_based=1 00:15:02.382 runtime=1 00:15:02.382 ioengine=libaio 00:15:02.382 direct=1 00:15:02.382 bs=4096 00:15:02.382 iodepth=1 00:15:02.382 norandommap=0 00:15:02.382 numjobs=1 00:15:02.382 00:15:02.382 verify_dump=1 00:15:02.382 verify_backlog=512 00:15:02.382 verify_state_save=0 00:15:02.382 do_verify=1 00:15:02.382 verify=crc32c-intel 00:15:02.382 [job0] 00:15:02.382 filename=/dev/nvme0n1 00:15:02.382 [job1] 00:15:02.382 filename=/dev/nvme0n2 00:15:02.382 [job2] 00:15:02.382 filename=/dev/nvme0n3 00:15:02.382 [job3] 00:15:02.382 filename=/dev/nvme0n4 00:15:02.382 Could not set queue depth (nvme0n1) 00:15:02.382 Could not set queue depth (nvme0n2) 00:15:02.382 Could not set queue depth (nvme0n3) 00:15:02.382 Could not set queue depth (nvme0n4) 00:15:02.382 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.382 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.382 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.382 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.382 fio-3.35 00:15:02.382 Starting 4 threads 00:15:03.748 00:15:03.748 job0: (groupid=0, jobs=1): err= 0: pid=2477777: Mon Jul 15 
13:45:30 2024 00:15:03.748 read: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1000msec) 00:15:03.748 slat (nsec): min=8106, max=30353, avg=9197.23, stdev=1127.71 00:15:03.748 clat (usec): min=67, max=199, avg=109.03, stdev=28.29 00:15:03.748 lat (usec): min=76, max=208, avg=118.23, stdev=28.31 00:15:03.748 clat percentiles (usec): 00:15:03.748 | 1.00th=[ 75], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:15:03.748 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 94], 60.00th=[ 122], 00:15:03.748 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:15:03.748 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 184], 99.95th=[ 186], 00:15:03.748 | 99.99th=[ 200] 00:15:03.748 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(17.8MiB/1000msec); 0 zone resets 00:15:03.748 slat (nsec): min=10251, max=67362, avg=11114.13, stdev=1824.48 00:15:03.748 clat (usec): min=55, max=386, avg=98.03, stdev=28.80 00:15:03.748 lat (usec): min=73, max=398, avg=109.14, stdev=29.09 00:15:03.748 clat percentiles (usec): 00:15:03.748 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 78], 00:15:03.748 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 89], 00:15:03.748 | 70.00th=[ 106], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 149], 00:15:03.748 | 99.00th=[ 169], 99.50th=[ 188], 99.90th=[ 302], 99.95th=[ 322], 00:15:03.748 | 99.99th=[ 388] 00:15:03.748 bw ( KiB/s): min=20480, max=20480, per=30.92%, avg=20480.00, stdev= 0.00, samples=1 00:15:03.748 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:15:03.748 lat (usec) : 100=62.39%, 250=37.50%, 500=0.10% 00:15:03.748 cpu : usr=4.80%, sys=9.70%, ctx=8654, majf=0, minf=1 00:15:03.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.748 issued rwts: total=4096,4557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.748 job1: (groupid=0, jobs=1): err= 0: pid=2477778: Mon Jul 15 13:45:30 2024 00:15:03.748 read: IOPS=3365, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:15:03.748 slat (nsec): min=4111, max=42681, avg=9116.08, stdev=1940.51 00:15:03.748 clat (usec): min=55, max=3495, avg=133.27, stdev=62.06 00:15:03.748 lat (usec): min=59, max=3517, avg=142.38, stdev=62.40 00:15:03.748 clat percentiles (usec): 00:15:03.748 | 1.00th=[ 77], 5.00th=[ 83], 10.00th=[ 89], 20.00th=[ 121], 00:15:03.748 | 30.00th=[ 129], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:15:03.748 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 159], 00:15:03.748 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 202], 99.95th=[ 208], 00:15:03.748 | 99.99th=[ 3490] 00:15:03.748 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:03.748 slat (nsec): min=9135, max=35749, avg=11481.23, stdev=1344.30 00:15:03.748 clat (usec): min=57, max=423, avg=129.53, stdev=20.83 00:15:03.748 lat (usec): min=74, max=435, avg=141.01, stdev=20.83 00:15:03.748 clat percentiles (usec): 00:15:03.748 | 1.00th=[ 73], 5.00th=[ 81], 10.00th=[ 106], 20.00th=[ 120], 00:15:03.748 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:15:03.748 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 155], 00:15:03.749 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 198], 99.95th=[ 247], 00:15:03.749 | 99.99th=[ 424] 00:15:03.749 bw ( KiB/s): min=14792, max=14792, per=22.33%, avg=14792.00, stdev= 0.00, 
samples=1 00:15:03.749 iops : min= 3698, max= 3698, avg=3698.00, stdev= 0.00, samples=1 00:15:03.749 lat (usec) : 100=10.37%, 250=89.60%, 500=0.01% 00:15:03.749 lat (msec) : 4=0.01% 00:15:03.749 cpu : usr=3.50%, sys=8.40%, ctx=6953, majf=0, minf=1 00:15:03.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.749 issued rwts: total=3369,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.749 job2: (groupid=0, jobs=1): err= 0: pid=2477779: Mon Jul 15 13:45:30 2024 00:15:03.749 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:15:03.749 slat (nsec): min=8581, max=49605, avg=9436.41, stdev=1446.30 00:15:03.749 clat (usec): min=71, max=260, avg=94.98, stdev=15.66 00:15:03.749 lat (usec): min=82, max=270, avg=104.42, stdev=15.85 00:15:03.749 clat percentiles (usec): 00:15:03.749 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:15:03.749 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 93], 00:15:03.749 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 104], 95.00th=[ 133], 00:15:03.749 | 99.00th=[ 159], 99.50th=[ 184], 99.90th=[ 225], 99.95th=[ 251], 00:15:03.749 | 99.99th=[ 262] 00:15:03.749 write: IOPS=4848, BW=18.9MiB/s (19.9MB/s)(19.0MiB/1001msec); 0 zone resets 00:15:03.749 slat (nsec): min=10662, max=40897, avg=11471.53, stdev=1617.25 00:15:03.749 clat (usec): min=70, max=422, avg=91.20, stdev=18.00 00:15:03.749 lat (usec): min=82, max=433, avg=102.67, stdev=18.40 00:15:03.749 clat percentiles (usec): 00:15:03.749 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:15:03.749 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 90], 00:15:03.749 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 117], 00:15:03.749 | 99.00th=[ 172], 99.50th=[ 200], 99.90th=[ 330], 99.95th=[ 347], 00:15:03.749 | 99.99th=[ 424] 00:15:03.749 bw ( KiB/s): min=20480, max=20480, per=30.92%, avg=20480.00, stdev= 0.00, samples=1 00:15:03.749 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:15:03.749 lat (usec) : 100=86.29%, 250=13.58%, 500=0.13% 00:15:03.749 cpu : usr=4.90%, sys=11.20%, ctx=9462, majf=0, minf=1 00:15:03.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.749 issued rwts: total=4608,4853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.749 job3: (groupid=0, jobs=1): err= 0: pid=2477780: Mon Jul 15 13:45:30 2024 00:15:03.749 read: IOPS=3144, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec) 00:15:03.749 slat (nsec): min=8711, max=25631, avg=9690.94, stdev=1145.56 00:15:03.749 clat (usec): min=73, max=366, avg=138.06, stdev=14.48 00:15:03.749 lat (usec): min=95, max=375, avg=147.75, stdev=14.51 00:15:03.749 clat percentiles (usec): 00:15:03.749 | 1.00th=[ 102], 5.00th=[ 116], 10.00th=[ 121], 20.00th=[ 128], 00:15:03.749 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:15:03.749 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 159], 00:15:03.749 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 200], 99.95th=[ 273], 00:15:03.749 | 99.99th=[ 367] 00:15:03.749 write: IOPS=3580, 
BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:03.749 slat (nsec): min=10581, max=55124, avg=11786.52, stdev=2088.46 00:15:03.749 clat (usec): min=74, max=430, avg=133.36, stdev=18.95 00:15:03.749 lat (usec): min=86, max=442, avg=145.15, stdev=19.28 00:15:03.749 clat percentiles (usec): 00:15:03.749 | 1.00th=[ 93], 5.00th=[ 110], 10.00th=[ 116], 20.00th=[ 123], 00:15:03.749 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:15:03.749 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 159], 00:15:03.749 | 99.00th=[ 188], 99.50th=[ 215], 99.90th=[ 359], 99.95th=[ 420], 00:15:03.749 | 99.99th=[ 433] 00:15:03.749 bw ( KiB/s): min=14776, max=14776, per=22.30%, avg=14776.00, stdev= 0.00, samples=1 00:15:03.749 iops : min= 3694, max= 3694, avg=3694.00, stdev= 0.00, samples=1 00:15:03.749 lat (usec) : 100=1.29%, 250=98.50%, 500=0.21% 00:15:03.749 cpu : usr=4.30%, sys=7.50%, ctx=6732, majf=0, minf=1 00:15:03.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.749 issued rwts: total=3148,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.749 00:15:03.749 Run status group 0 (all jobs): 00:15:03.749 READ: bw=59.4MiB/s (62.3MB/s), 12.3MiB/s-18.0MiB/s (12.9MB/s-18.9MB/s), io=59.5MiB (62.3MB), run=1000-1001msec 00:15:03.749 WRITE: bw=64.7MiB/s (67.8MB/s), 14.0MiB/s-18.9MiB/s (14.7MB/s-19.9MB/s), io=64.8MiB (67.9MB), run=1000-1001msec 00:15:03.749 00:15:03.749 Disk stats (read/write): 00:15:03.749 nvme0n1: ios=3634/4069, merge=0/0, ticks=376/342, in_queue=718, util=86.07% 00:15:03.749 nvme0n2: ios=2836/3072, merge=0/0, ticks=362/386, in_queue=748, util=86.57% 00:15:03.749 nvme0n3: ios=4074/4096, merge=0/0, ticks=362/337, in_queue=699, util=88.92% 00:15:03.749 nvme0n4: ios=2675/3072, merge=0/0, ticks=354/386, in_queue=740, util=89.67% 00:15:03.749 13:45:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:03.749 [global] 00:15:03.749 thread=1 00:15:03.749 invalidate=1 00:15:03.749 rw=write 00:15:03.749 time_based=1 00:15:03.749 runtime=1 00:15:03.749 ioengine=libaio 00:15:03.749 direct=1 00:15:03.749 bs=4096 00:15:03.749 iodepth=128 00:15:03.749 norandommap=0 00:15:03.749 numjobs=1 00:15:03.749 00:15:03.749 verify_dump=1 00:15:03.749 verify_backlog=512 00:15:03.749 verify_state_save=0 00:15:03.749 do_verify=1 00:15:03.749 verify=crc32c-intel 00:15:03.749 [job0] 00:15:03.749 filename=/dev/nvme0n1 00:15:03.749 [job1] 00:15:03.749 filename=/dev/nvme0n2 00:15:03.749 [job2] 00:15:03.749 filename=/dev/nvme0n3 00:15:03.749 [job3] 00:15:03.749 filename=/dev/nvme0n4 00:15:03.749 Could not set queue depth (nvme0n1) 00:15:03.749 Could not set queue depth (nvme0n2) 00:15:03.749 Could not set queue depth (nvme0n3) 00:15:03.749 Could not set queue depth (nvme0n4) 00:15:04.005 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.005 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.005 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.005 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.005 fio-3.35 00:15:04.005 Starting 4 threads 00:15:05.374 00:15:05.374 job0: (groupid=0, jobs=1): err= 0: pid=2478080: Mon Jul 15 13:45:31 2024 00:15:05.374 read: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec) 00:15:05.374 slat (usec): min=2, max=5277, avg=64.08, stdev=308.80 00:15:05.374 clat (usec): min=2507, max=17014, avg=8433.43, stdev=2739.62 00:15:05.374 lat (usec): min=2614, max=17669, avg=8497.52, stdev=2754.09 00:15:05.374 clat percentiles (usec): 00:15:05.374 | 1.00th=[ 3982], 5.00th=[ 4752], 10.00th=[ 5145], 20.00th=[ 5866], 00:15:05.374 | 30.00th=[ 6587], 40.00th=[ 7242], 50.00th=[ 7898], 60.00th=[ 8717], 00:15:05.374 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[12256], 95.00th=[13435], 00:15:05.374 | 99.00th=[15008], 99.50th=[15926], 99.90th=[16909], 99.95th=[16909], 00:15:05.374 | 99.99th=[16909] 00:15:05.374 write: IOPS=7697, BW=30.1MiB/s (31.5MB/s)(30.1MiB/1002msec); 0 zone resets 00:15:05.374 slat (usec): min=2, max=5128, avg=62.08, stdev=308.65 00:15:05.374 clat (usec): min=1142, max=19146, avg=8057.13, stdev=3227.21 00:15:05.374 lat (usec): min=2119, max=19149, avg=8119.21, stdev=3243.68 00:15:05.374 clat percentiles (usec): 00:15:05.374 | 1.00th=[ 3523], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5407], 00:15:05.374 | 30.00th=[ 5866], 40.00th=[ 6652], 50.00th=[ 7242], 60.00th=[ 8225], 00:15:05.374 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[12911], 95.00th=[15008], 00:15:05.374 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 00:15:05.374 | 99.99th=[19268] 00:15:05.374 bw ( KiB/s): min=28672, max=32768, per=29.66%, avg=30720.00, stdev=2896.31, samples=2 00:15:05.374 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:15:05.374 lat (msec) : 2=0.01%, 4=2.09%, 10=71.97%, 20=25.93% 00:15:05.374 cpu : usr=3.30%, sys=6.39%, ctx=1375, majf=0, minf=1 00:15:05.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:05.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.374 issued rwts: total=7680,7713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.374 job1: (groupid=0, jobs=1): err= 0: pid=2478081: Mon Jul 15 13:45:31 2024 00:15:05.374 read: IOPS=7019, BW=27.4MiB/s (28.8MB/s)(27.5MiB/1002msec) 00:15:05.374 slat (usec): min=2, max=6713, avg=69.68, stdev=353.97 00:15:05.374 clat (usec): min=1647, max=20755, avg=9083.39, stdev=3162.84 00:15:05.374 lat (usec): min=3502, max=20964, avg=9153.08, stdev=3177.52 00:15:05.374 clat percentiles (usec): 00:15:05.374 | 1.00th=[ 4228], 5.00th=[ 4883], 10.00th=[ 5145], 20.00th=[ 5997], 00:15:05.374 | 30.00th=[ 6849], 40.00th=[ 7767], 50.00th=[ 8586], 60.00th=[ 9765], 00:15:05.374 | 70.00th=[10814], 80.00th=[11863], 90.00th=[13435], 95.00th=[14746], 00:15:05.374 | 99.00th=[16909], 99.50th=[17695], 99.90th=[20841], 99.95th=[20841], 00:15:05.374 | 99.99th=[20841] 00:15:05.374 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:15:05.374 slat (usec): min=2, max=6489, avg=67.06, stdev=337.60 00:15:05.374 clat (usec): min=1331, max=22178, avg=8788.43, stdev=3443.97 00:15:05.374 lat (usec): min=1340, max=22181, avg=8855.49, stdev=3461.68 00:15:05.374 clat percentiles (usec): 00:15:05.374 | 1.00th=[ 3752], 5.00th=[ 4555], 10.00th=[ 5014], 20.00th=[ 5735], 00:15:05.374 | 30.00th=[ 6325], 40.00th=[ 6915], 50.00th=[ 8094], 
60.00th=[ 9372], 00:15:05.374 | 70.00th=[10552], 80.00th=[11863], 90.00th=[13566], 95.00th=[14484], 00:15:05.374 | 99.00th=[18482], 99.50th=[20317], 99.90th=[22152], 99.95th=[22152], 00:15:05.374 | 99.99th=[22152] 00:15:05.374 bw ( KiB/s): min=26840, max=30504, per=27.68%, avg=28672.00, stdev=2590.84, samples=2 00:15:05.374 iops : min= 6710, max= 7626, avg=7168.00, stdev=647.71, samples=2 00:15:05.374 lat (msec) : 2=0.10%, 4=1.17%, 10=61.60%, 20=36.74%, 50=0.39% 00:15:05.374 cpu : usr=3.30%, sys=5.69%, ctx=1519, majf=0, minf=1 00:15:05.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:05.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.374 issued rwts: total=7034,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.374 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.374 job2: (groupid=0, jobs=1): err= 0: pid=2478082: Mon Jul 15 13:45:31 2024 00:15:05.374 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:15:05.374 slat (usec): min=2, max=6904, avg=84.72, stdev=418.83 00:15:05.374 clat (usec): min=3595, max=21047, avg=10935.47, stdev=3302.12 00:15:05.374 lat (usec): min=3598, max=21849, avg=11020.19, stdev=3318.64 00:15:05.374 clat percentiles (usec): 00:15:05.374 | 1.00th=[ 5145], 5.00th=[ 6259], 10.00th=[ 7242], 20.00th=[ 8094], 00:15:05.374 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10552], 60.00th=[11469], 00:15:05.374 | 70.00th=[12256], 80.00th=[13435], 90.00th=[15533], 95.00th=[17695], 00:15:05.374 | 99.00th=[19530], 99.50th=[20055], 99.90th=[21103], 99.95th=[21103], 00:15:05.374 | 99.99th=[21103] 00:15:05.374 write: IOPS=5911, BW=23.1MiB/s (24.2MB/s)(23.1MiB/1002msec); 0 zone resets 00:15:05.374 slat (usec): min=2, max=5463, avg=84.71, stdev=386.50 00:15:05.374 clat (usec): min=1634, max=20529, avg=10959.37, stdev=3515.66 00:15:05.374 lat (usec): min=1644, max=21056, avg=11044.07, stdev=3525.69 00:15:05.374 clat percentiles (usec): 00:15:05.374 | 1.00th=[ 4490], 5.00th=[ 5604], 10.00th=[ 6456], 20.00th=[ 8029], 00:15:05.374 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10552], 60.00th=[11469], 00:15:05.374 | 70.00th=[12649], 80.00th=[14222], 90.00th=[16057], 95.00th=[17171], 00:15:05.374 | 99.00th=[19006], 99.50th=[20055], 99.90th=[20579], 99.95th=[20579], 00:15:05.374 | 99.99th=[20579] 00:15:05.374 bw ( KiB/s): min=22704, max=23664, per=22.38%, avg=23184.00, stdev=678.82, samples=2 00:15:05.374 iops : min= 5676, max= 5916, avg=5796.00, stdev=169.71, samples=2 00:15:05.374 lat (msec) : 2=0.03%, 4=0.44%, 10=44.25%, 20=54.73%, 50=0.55% 00:15:05.374 cpu : usr=2.60%, sys=4.70%, ctx=1363, majf=0, minf=1 00:15:05.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:05.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.375 issued rwts: total=5632,5923,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.375 job3: (groupid=0, jobs=1): err= 0: pid=2478083: Mon Jul 15 13:45:31 2024 00:15:05.375 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:15:05.375 slat (usec): min=2, max=4908, avg=92.22, stdev=402.30 00:15:05.375 clat (usec): min=3844, max=23041, avg=11695.09, stdev=4219.44 00:15:05.375 lat (usec): min=3847, max=24919, avg=11787.31, stdev=4244.06 00:15:05.375 clat percentiles (usec): 
00:15:05.375 | 1.00th=[ 4883], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 8225], 00:15:05.375 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11731], 00:15:05.375 | 70.00th=[13304], 80.00th=[14746], 90.00th=[19006], 95.00th=[20317], 00:15:05.375 | 99.00th=[22152], 99.50th=[22676], 99.90th=[22938], 99.95th=[22938], 00:15:05.375 | 99.99th=[22938] 00:15:05.375 write: IOPS=5132, BW=20.0MiB/s (21.0MB/s)(20.1MiB/1002msec); 0 zone resets 00:15:05.375 slat (usec): min=2, max=5228, avg=98.41, stdev=418.66 00:15:05.375 clat (usec): min=1219, max=23829, avg=12948.40, stdev=4935.00 00:15:05.375 lat (usec): min=3422, max=24127, avg=13046.81, stdev=4960.06 00:15:05.375 clat percentiles (usec): 00:15:05.375 | 1.00th=[ 4883], 5.00th=[ 6521], 10.00th=[ 7242], 20.00th=[ 8356], 00:15:05.375 | 30.00th=[ 9503], 40.00th=[10552], 50.00th=[11731], 60.00th=[13566], 00:15:05.375 | 70.00th=[15926], 80.00th=[18220], 90.00th=[20841], 95.00th=[21890], 00:15:05.375 | 99.00th=[23200], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:15:05.375 | 99.99th=[23725] 00:15:05.375 bw ( KiB/s): min=19296, max=21664, per=19.77%, avg=20480.00, stdev=1674.43, samples=2 00:15:05.375 iops : min= 4824, max= 5416, avg=5120.00, stdev=418.61, samples=2 00:15:05.375 lat (msec) : 2=0.01%, 4=0.45%, 10=38.23%, 20=52.37%, 50=8.94% 00:15:05.375 cpu : usr=2.70%, sys=4.70%, ctx=1063, majf=0, minf=1 00:15:05.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:05.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.375 issued rwts: total=5120,5143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.375 00:15:05.375 Run status group 0 (all jobs): 00:15:05.375 READ: bw=99.3MiB/s (104MB/s), 20.0MiB/s-29.9MiB/s (20.9MB/s-31.4MB/s), io=99.5MiB (104MB), run=1002-1002msec 00:15:05.375 WRITE: bw=101MiB/s (106MB/s), 20.0MiB/s-30.1MiB/s (21.0MB/s-31.5MB/s), io=101MiB (106MB), run=1002-1002msec 00:15:05.375 00:15:05.375 Disk stats (read/write): 00:15:05.375 nvme0n1: ios=6194/6532, merge=0/0, ticks=17314/17549, in_queue=34863, util=84.87% 00:15:05.375 nvme0n2: ios=6006/6144, merge=0/0, ticks=17978/17163, in_queue=35141, util=84.90% 00:15:05.375 nvme0n3: ios=4812/5120, merge=0/0, ticks=16866/17875, in_queue=34741, util=86.79% 00:15:05.375 nvme0n4: ios=4096/4148, merge=0/0, ticks=15413/15011, in_queue=30424, util=89.11% 00:15:05.375 13:45:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:05.375 [global] 00:15:05.375 thread=1 00:15:05.375 invalidate=1 00:15:05.375 rw=randwrite 00:15:05.375 time_based=1 00:15:05.375 runtime=1 00:15:05.375 ioengine=libaio 00:15:05.375 direct=1 00:15:05.375 bs=4096 00:15:05.375 iodepth=128 00:15:05.375 norandommap=0 00:15:05.375 numjobs=1 00:15:05.375 00:15:05.375 verify_dump=1 00:15:05.375 verify_backlog=512 00:15:05.375 verify_state_save=0 00:15:05.375 do_verify=1 00:15:05.375 verify=crc32c-intel 00:15:05.375 [job0] 00:15:05.375 filename=/dev/nvme0n1 00:15:05.375 [job1] 00:15:05.375 filename=/dev/nvme0n2 00:15:05.375 [job2] 00:15:05.375 filename=/dev/nvme0n3 00:15:05.375 [job3] 00:15:05.375 filename=/dev/nvme0n4 00:15:05.375 Could not set queue depth (nvme0n1) 00:15:05.375 Could not set queue depth (nvme0n2) 00:15:05.375 Could not set queue depth (nvme0n3) 00:15:05.375 Could not set 
queue depth (nvme0n4) 00:15:05.631 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.631 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.631 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.631 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.631 fio-3.35 00:15:05.631 Starting 4 threads 00:15:07.004 00:15:07.004 job0: (groupid=0, jobs=1): err= 0: pid=2478383: Mon Jul 15 13:45:33 2024 00:15:07.004 read: IOPS=6409, BW=25.0MiB/s (26.3MB/s)(25.1MiB/1002msec) 00:15:07.004 slat (usec): min=2, max=7517, avg=81.24, stdev=394.96 00:15:07.004 clat (usec): min=1575, max=25012, avg=10480.34, stdev=4464.88 00:15:07.004 lat (usec): min=2320, max=25015, avg=10561.58, stdev=4485.05 00:15:07.004 clat percentiles (usec): 00:15:07.004 | 1.00th=[ 3589], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5997], 00:15:07.004 | 30.00th=[ 6980], 40.00th=[ 8291], 50.00th=[ 9765], 60.00th=[11600], 00:15:07.004 | 70.00th=[13435], 80.00th=[15270], 90.00th=[16450], 95.00th=[17957], 00:15:07.004 | 99.00th=[20579], 99.50th=[21103], 99.90th=[23725], 99.95th=[25035], 00:15:07.004 | 99.99th=[25035] 00:15:07.004 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:15:07.004 slat (usec): min=2, max=4020, avg=67.85, stdev=313.30 00:15:07.004 clat (usec): min=2729, max=25714, avg=8939.73, stdev=3752.13 00:15:07.004 lat (usec): min=2733, max=25717, avg=9007.58, stdev=3771.62 00:15:07.004 clat percentiles (usec): 00:15:07.004 | 1.00th=[ 3556], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5473], 00:15:07.004 | 30.00th=[ 6063], 40.00th=[ 6783], 50.00th=[ 8029], 60.00th=[ 9765], 00:15:07.004 | 70.00th=[10945], 80.00th=[12518], 90.00th=[14353], 95.00th=[15401], 00:15:07.004 | 99.00th=[19792], 99.50th=[21890], 99.90th=[24249], 99.95th=[24249], 00:15:07.004 | 99.99th=[25822] 00:15:07.004 bw ( KiB/s): min=24576, max=28672, per=27.06%, avg=26624.00, stdev=2896.31, samples=2 00:15:07.004 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:15:07.004 lat (msec) : 2=0.01%, 4=2.10%, 10=55.53%, 20=41.19%, 50=1.17% 00:15:07.004 cpu : usr=3.50%, sys=5.19%, ctx=1316, majf=0, minf=1 00:15:07.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:07.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.004 issued rwts: total=6422,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.004 job1: (groupid=0, jobs=1): err= 0: pid=2478389: Mon Jul 15 13:45:33 2024 00:15:07.004 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:15:07.004 slat (usec): min=2, max=7847, avg=76.80, stdev=392.54 00:15:07.004 clat (usec): min=3129, max=25127, avg=10025.71, stdev=4236.39 00:15:07.004 lat (usec): min=3132, max=26455, avg=10102.51, stdev=4260.02 00:15:07.004 clat percentiles (usec): 00:15:07.004 | 1.00th=[ 4424], 5.00th=[ 5604], 10.00th=[ 6456], 20.00th=[ 6783], 00:15:07.004 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 8586], 60.00th=[ 9896], 00:15:07.004 | 70.00th=[11469], 80.00th=[13042], 90.00th=[16450], 95.00th=[19530], 00:15:07.004 | 99.00th=[22938], 99.50th=[25035], 99.90th=[25035], 99.95th=[25035], 00:15:07.004 | 99.99th=[25035] 
00:15:07.004 write: IOPS=6549, BW=25.6MiB/s (26.8MB/s)(25.7MiB/1003msec); 0 zone resets 00:15:07.004 slat (usec): min=2, max=5518, avg=76.44, stdev=352.36 00:15:07.004 clat (usec): min=1563, max=24381, avg=9943.99, stdev=4521.99 00:15:07.004 lat (usec): min=3744, max=24386, avg=10020.43, stdev=4547.73 00:15:07.004 clat percentiles (usec): 00:15:07.004 | 1.00th=[ 4490], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:15:07.004 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7767], 60.00th=[ 9634], 00:15:07.004 | 70.00th=[11600], 80.00th=[13304], 90.00th=[18220], 95.00th=[19530], 00:15:07.004 | 99.00th=[21627], 99.50th=[22676], 99.90th=[24249], 99.95th=[24249], 00:15:07.004 | 99.99th=[24511] 00:15:07.004 bw ( KiB/s): min=22216, max=29320, per=26.19%, avg=25768.00, stdev=5023.29, samples=2 00:15:07.004 iops : min= 5554, max= 7330, avg=6442.00, stdev=1255.82, samples=2 00:15:07.004 lat (msec) : 2=0.01%, 4=0.55%, 10=60.51%, 20=35.17%, 50=3.76% 00:15:07.004 cpu : usr=3.29%, sys=5.39%, ctx=1449, majf=0, minf=1 00:15:07.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:07.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.004 issued rwts: total=6144,6569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.004 job2: (groupid=0, jobs=1): err= 0: pid=2478398: Mon Jul 15 13:45:33 2024 00:15:07.004 read: IOPS=5380, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1002msec) 00:15:07.004 slat (usec): min=2, max=6652, avg=83.93, stdev=376.89 00:15:07.004 clat (usec): min=826, max=26412, avg=10855.29, stdev=4065.57 00:15:07.004 lat (usec): min=2851, max=26415, avg=10939.21, stdev=4084.95 00:15:07.004 clat percentiles (usec): 00:15:07.004 | 1.00th=[ 4359], 5.00th=[ 5997], 10.00th=[ 7111], 20.00th=[ 7832], 00:15:07.004 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[11076], 00:15:07.004 | 70.00th=[11994], 80.00th=[14222], 90.00th=[16712], 95.00th=[19530], 00:15:07.004 | 99.00th=[22152], 99.50th=[23725], 99.90th=[26346], 99.95th=[26346], 00:15:07.004 | 99.99th=[26346] 00:15:07.004 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:15:07.004 slat (usec): min=2, max=5846, avg=93.21, stdev=403.80 00:15:07.004 clat (usec): min=3952, max=27631, avg=12132.27, stdev=5419.26 00:15:07.004 lat (usec): min=3999, max=27635, avg=12225.48, stdev=5453.42 00:15:07.004 clat percentiles (usec): 00:15:07.004 | 1.00th=[ 5145], 5.00th=[ 6063], 10.00th=[ 7111], 20.00th=[ 7701], 00:15:07.004 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[10421], 60.00th=[11863], 00:15:07.004 | 70.00th=[13829], 80.00th=[16909], 90.00th=[21103], 95.00th=[23462], 00:15:07.004 | 99.00th=[26084], 99.50th=[26608], 99.90th=[27395], 99.95th=[27657], 00:15:07.004 | 99.99th=[27657] 00:15:07.004 bw ( KiB/s): min=20480, max=24576, per=22.89%, avg=22528.00, stdev=2896.31, samples=2 00:15:07.004 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:15:07.004 lat (usec) : 1000=0.01% 00:15:07.004 lat (msec) : 4=0.20%, 10=50.41%, 20=40.62%, 50=8.76% 00:15:07.004 cpu : usr=2.60%, sys=5.00%, ctx=1227, majf=0, minf=1 00:15:07.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:07.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.004 issued rwts: total=5391,5632,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:15:07.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.004 job3: (groupid=0, jobs=1): err= 0: pid=2478401: Mon Jul 15 13:45:33 2024 00:15:07.004 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:15:07.004 slat (usec): min=2, max=5019, avg=83.01, stdev=391.46 00:15:07.004 clat (usec): min=3605, max=24546, avg=10998.65, stdev=4344.32 00:15:07.004 lat (usec): min=3682, max=24549, avg=11081.67, stdev=4362.44 00:15:07.004 clat percentiles (usec): 00:15:07.004 | 1.00th=[ 4883], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 7046], 00:15:07.004 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[11338], 00:15:07.004 | 70.00th=[13435], 80.00th=[15401], 90.00th=[17171], 95.00th=[19268], 00:15:07.004 | 99.00th=[22414], 99.50th=[22938], 99.90th=[24249], 99.95th=[24511], 00:15:07.004 | 99.99th=[24511] 00:15:07.004 write: IOPS=5800, BW=22.7MiB/s (23.8MB/s)(22.7MiB/1003msec); 0 zone resets 00:15:07.004 slat (usec): min=2, max=6547, avg=87.39, stdev=399.36 00:15:07.004 clat (usec): min=1930, max=24978, avg=11099.79, stdev=4657.82 00:15:07.004 lat (usec): min=2296, max=24982, avg=11187.18, stdev=4683.16 00:15:07.004 clat percentiles (usec): 00:15:07.004 | 1.00th=[ 4686], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6980], 00:15:07.004 | 30.00th=[ 7767], 40.00th=[ 8356], 50.00th=[ 9372], 60.00th=[11469], 00:15:07.004 | 70.00th=[14353], 80.00th=[15270], 90.00th=[18220], 95.00th=[20055], 00:15:07.004 | 99.00th=[23200], 99.50th=[23725], 99.90th=[24773], 99.95th=[25035], 00:15:07.004 | 99.99th=[25035] 00:15:07.004 bw ( KiB/s): min=21152, max=24376, per=23.13%, avg=22764.00, stdev=2279.71, samples=2 00:15:07.004 iops : min= 5288, max= 6094, avg=5691.00, stdev=569.93, samples=2 00:15:07.004 lat (msec) : 2=0.01%, 4=0.27%, 10=52.61%, 20=43.46%, 50=3.65% 00:15:07.004 cpu : usr=2.79%, sys=4.89%, ctx=1101, majf=0, minf=1 00:15:07.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:07.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:07.004 issued rwts: total=5632,5818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:07.004 00:15:07.004 Run status group 0 (all jobs): 00:15:07.004 READ: bw=91.9MiB/s (96.3MB/s), 21.0MiB/s-25.0MiB/s (22.0MB/s-26.3MB/s), io=92.1MiB (96.6MB), run=1002-1003msec 00:15:07.004 WRITE: bw=96.1MiB/s (101MB/s), 22.0MiB/s-25.9MiB/s (23.0MB/s-27.2MB/s), io=96.4MiB (101MB), run=1002-1003msec 00:15:07.004 00:15:07.004 Disk stats (read/write): 00:15:07.004 nvme0n1: ios=5679/5654, merge=0/0, ticks=17313/14321, in_queue=31634, util=84.07% 00:15:07.004 nvme0n2: ios=5504/5632, merge=0/0, ticks=15949/14494, in_queue=30443, util=84.71% 00:15:07.004 nvme0n3: ios=4449/4608, merge=0/0, ticks=13782/15551, in_queue=29333, util=88.25% 00:15:07.004 nvme0n4: ios=4608/5028, merge=0/0, ticks=14242/16977, in_queue=31219, util=88.99% 00:15:07.004 13:45:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:07.004 13:45:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2478567 00:15:07.004 13:45:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:07.004 13:45:33 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:07.004 [global] 00:15:07.004 thread=1 00:15:07.004 invalidate=1 00:15:07.004 rw=read 
00:15:07.004 time_based=1 00:15:07.004 runtime=10 00:15:07.004 ioengine=libaio 00:15:07.004 direct=1 00:15:07.004 bs=4096 00:15:07.004 iodepth=1 00:15:07.004 norandommap=1 00:15:07.004 numjobs=1 00:15:07.004 00:15:07.004 [job0] 00:15:07.004 filename=/dev/nvme0n1 00:15:07.004 [job1] 00:15:07.004 filename=/dev/nvme0n2 00:15:07.004 [job2] 00:15:07.004 filename=/dev/nvme0n3 00:15:07.004 [job3] 00:15:07.004 filename=/dev/nvme0n4 00:15:07.004 Could not set queue depth (nvme0n1) 00:15:07.004 Could not set queue depth (nvme0n2) 00:15:07.004 Could not set queue depth (nvme0n3) 00:15:07.004 Could not set queue depth (nvme0n4) 00:15:07.004 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.004 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.004 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.004 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.004 fio-3.35 00:15:07.004 Starting 4 threads 00:15:10.275 13:45:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:10.275 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=70930432, buflen=4096 00:15:10.275 fio: pid=2478793, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:10.275 13:45:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:10.275 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=78012416, buflen=4096 00:15:10.275 fio: pid=2478788, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:10.275 13:45:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:10.275 13:45:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:10.275 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=61333504, buflen=4096 00:15:10.275 fio: pid=2478762, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:10.531 13:45:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:10.531 13:45:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:10.531 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=2940928, buflen=4096 00:15:10.531 fio: pid=2478777, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:10.531 13:45:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:10.531 13:45:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:10.531 00:15:10.531 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2478762: Mon Jul 15 13:45:37 2024 00:15:10.531 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(122MiB/3130msec) 00:15:10.531 slat (usec): min=3, max=22258, avg= 9.55, stdev=184.73 00:15:10.531 clat (usec): min=52, max=556, avg=88.44, stdev=17.17 
00:15:10.531 lat (usec): min=59, max=22371, avg=97.99, stdev=185.73 00:15:10.531 clat percentiles (usec): 00:15:10.531 | 1.00th=[ 64], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 76], 00:15:10.531 | 30.00th=[ 78], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 88], 00:15:10.531 | 70.00th=[ 94], 80.00th=[ 103], 90.00th=[ 113], 95.00th=[ 118], 00:15:10.531 | 99.00th=[ 147], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 192], 00:15:10.531 | 99.99th=[ 229] 00:15:10.531 bw ( KiB/s): min=35119, max=50616, per=33.23%, avg=40650.50, stdev=5328.10, samples=6 00:15:10.531 iops : min= 8779, max=12654, avg=10162.50, stdev=1332.18, samples=6 00:15:10.531 lat (usec) : 100=76.38%, 250=23.61%, 500=0.01%, 750=0.01% 00:15:10.531 cpu : usr=2.75%, sys=9.08%, ctx=31365, majf=0, minf=1 00:15:10.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.531 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.531 issued rwts: total=31359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:10.531 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2478777: Mon Jul 15 13:45:37 2024 00:15:10.531 read: IOPS=10.1k, BW=39.5MiB/s (41.5MB/s)(131MiB/3309msec) 00:15:10.531 slat (usec): min=3, max=29369, avg=10.95, stdev=190.34 00:15:10.531 clat (usec): min=39, max=538, avg=86.44, stdev=16.18 00:15:10.532 lat (usec): min=59, max=29481, avg=97.39, stdev=191.10 00:15:10.532 clat percentiles (usec): 00:15:10.532 | 1.00th=[ 56], 5.00th=[ 61], 10.00th=[ 73], 20.00th=[ 77], 00:15:10.532 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:15:10.532 | 70.00th=[ 89], 80.00th=[ 101], 90.00th=[ 112], 95.00th=[ 117], 00:15:10.532 | 99.00th=[ 126], 99.50th=[ 133], 99.90th=[ 153], 99.95th=[ 157], 00:15:10.532 | 99.99th=[ 190] 00:15:10.532 bw ( KiB/s): min=38000, max=44296, per=32.93%, avg=40289.33, stdev=2692.46, samples=6 00:15:10.532 iops : min= 9500, max=11074, avg=10072.33, stdev=673.11, samples=6 00:15:10.532 lat (usec) : 50=0.01%, 100=79.15%, 250=20.83%, 500=0.01%, 750=0.01% 00:15:10.532 cpu : usr=3.17%, sys=11.00%, ctx=33497, majf=0, minf=1 00:15:10.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.532 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.532 issued rwts: total=33487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:10.532 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2478788: Mon Jul 15 13:45:37 2024 00:15:10.532 read: IOPS=6469, BW=25.3MiB/s (26.5MB/s)(74.4MiB/2944msec) 00:15:10.532 slat (usec): min=8, max=13898, avg=10.84, stdev=132.18 00:15:10.532 clat (usec): min=68, max=260, avg=141.13, stdev=28.32 00:15:10.532 lat (usec): min=82, max=13995, avg=151.98, stdev=134.77 00:15:10.532 clat percentiles (usec): 00:15:10.532 | 1.00th=[ 82], 5.00th=[ 88], 10.00th=[ 95], 20.00th=[ 117], 00:15:10.532 | 30.00th=[ 135], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:15:10.532 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 172], 95.00th=[ 192], 00:15:10.532 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 219], 99.95th=[ 227], 00:15:10.532 | 99.99th=[ 249] 00:15:10.532 bw ( KiB/s): min=24696, max=26512, per=20.66%, 
avg=25281.60, stdev=732.04, samples=5 00:15:10.532 iops : min= 6174, max= 6628, avg=6320.40, stdev=183.01, samples=5 00:15:10.532 lat (usec) : 100=12.33%, 250=87.66%, 500=0.01% 00:15:10.532 cpu : usr=2.79%, sys=7.03%, ctx=19050, majf=0, minf=1 00:15:10.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.532 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.532 issued rwts: total=19047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:10.532 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2478793: Mon Jul 15 13:45:37 2024 00:15:10.532 read: IOPS=6302, BW=24.6MiB/s (25.8MB/s)(67.6MiB/2748msec) 00:15:10.532 slat (nsec): min=8310, max=57510, avg=9913.27, stdev=1244.55 00:15:10.532 clat (usec): min=69, max=228, avg=145.56, stdev=22.24 00:15:10.532 lat (usec): min=87, max=253, avg=155.47, stdev=22.28 00:15:10.532 clat percentiles (usec): 00:15:10.532 | 1.00th=[ 94], 5.00th=[ 103], 10.00th=[ 114], 20.00th=[ 133], 00:15:10.532 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:15:10.532 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 192], 00:15:10.532 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 217], 99.95th=[ 221], 00:15:10.532 | 99.99th=[ 225] 00:15:10.532 bw ( KiB/s): min=24704, max=26552, per=20.67%, avg=25289.60, stdev=748.45, samples=5 00:15:10.532 iops : min= 6176, max= 6638, avg=6322.40, stdev=187.11, samples=5 00:15:10.532 lat (usec) : 100=3.40%, 250=96.59% 00:15:10.532 cpu : usr=2.91%, sys=7.10%, ctx=17319, majf=0, minf=2 00:15:10.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.532 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.532 issued rwts: total=17318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:10.532 00:15:10.532 Run status group 0 (all jobs): 00:15:10.532 READ: bw=119MiB/s (125MB/s), 24.6MiB/s-39.5MiB/s (25.8MB/s-41.5MB/s), io=395MiB (415MB), run=2748-3309msec 00:15:10.532 00:15:10.532 Disk stats (read/write): 00:15:10.532 nvme0n1: ios=31321/0, merge=0/0, ticks=2605/0, in_queue=2605, util=93.77% 00:15:10.532 nvme0n2: ios=31154/0, merge=0/0, ticks=2579/0, in_queue=2579, util=95.02% 00:15:10.532 nvme0n3: ios=18365/0, merge=0/0, ticks=2518/0, in_queue=2518, util=95.70% 00:15:10.532 nvme0n4: ios=16392/0, merge=0/0, ticks=2325/0, in_queue=2325, util=96.44% 00:15:10.788 13:45:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:10.788 13:45:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:11.044 13:45:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:11.044 13:45:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:11.299 13:45:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:11.299 13:45:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:11.556 13:45:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:11.556 13:45:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:11.556 13:45:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:11.556 13:45:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 2478567 00:15:11.556 13:45:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:11.556 13:45:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.482 13:45:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:12.482 13:45:38 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:12.482 13:45:38 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:12.482 13:45:38 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.482 13:45:38 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:12.482 13:45:38 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.482 13:45:38 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:12.482 13:45:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:12.482 13:45:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:12.482 nvmf hotplug test: fio failed as expected 00:15:12.482 13:45:38 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:12.738 rmmod nvme_rdma 00:15:12.738 rmmod nvme_fabrics 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:12.738 
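[Editor's illustration, not captured log output] The hotplug portion of this run (the target/fio.sh@55-80 steps traced above) boils down to: start a 10-second fio read job through the SPDK fio wrapper, delete the backing raid and malloc bdevs over RPC while the job is still running, then wait for fio and expect Remote I/O errors. A condensed, hedged bash sketch of that sequence, using only the paths, flags, and RPC calls visible in the log:
    # sketch only; the authoritative steps live in test/nvmf/target/fio.sh
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10s read job against nvme0n1..nvme0n4
    fio_pid=$!
    sleep 3                                                            # let the job ramp up before pulling storage
    "$SPDK/scripts/rpc.py" bdev_raid_delete concat0                    # remove bdevs out from under the live subsystem
    "$SPDK/scripts/rpc.py" bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$SPDK/scripts/rpc.py" bdev_malloc_delete "$m"
    done
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected' # a nonzero fio exit is the pass condition here
The deletions are issued mid-run on purpose: the initiator sees the namespaces vanish and fio's reads complete with Remote I/O errors, which is exactly what the "fio: io_u error ... error=Remote I/O error" lines above report.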
13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2476238 ']' 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2476238 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2476238 ']' 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2476238 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.738 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2476238 00:15:12.996 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:12.996 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:12.996 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2476238' 00:15:12.996 killing process with pid 2476238 00:15:12.996 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2476238 00:15:12.996 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2476238 00:15:13.255 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.255 13:45:39 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:13.255 00:15:13.255 real 0m26.747s 00:15:13.255 user 1m36.615s 00:15:13.255 sys 0m10.652s 00:15:13.255 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.255 13:45:39 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.255 ************************************ 00:15:13.255 END TEST nvmf_fio_target 00:15:13.255 ************************************ 00:15:13.255 13:45:39 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:13.255 13:45:39 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:15:13.255 13:45:39 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:13.255 13:45:39 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.255 13:45:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:13.255 ************************************ 00:15:13.255 START TEST nvmf_bdevio 00:15:13.255 ************************************ 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:15:13.255 * Looking for test storage... 
00:15:13.255 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.255 13:45:39 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.513 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:13.513 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:13.513 13:45:39 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:13.513 13:45:39 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
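By nvmf/common.sh@327 the trace has filled per-vendor allow-lists of NIC PCI IDs from pci_bus_cache (Intel e810/x722 entries such as 0x8086:0x1592/0x159b plus the Mellanox ConnectX IDs including 0x15b3:0x1015) and hit the mlx5 selection check ([[ mlx5 == mlx5 ]]). A rough manual equivalent of that device scan, assuming lspci is available on the node (the script itself reads its cached PCI map rather than shelling out to lspci), would be:

  lspci -nn -d 15b3:        # Mellanox (vendor 0x15b3) NICs; this rig reports 0x1015 devices at 0000:18:00.0/.1
  lspci -nn -d 8086:159b    # one of the Intel E810 IDs carried in the e810 list above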
00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:20.076 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:20.076 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:20.076 Found net devices under 0000:18:00.0: mlx_0_0 00:15:20.076 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:20.077 Found net devices under 0000:18:00.1: mlx_0_1 00:15:20.077 
13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:20.077 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:20.077 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:15:20.077 altname enp24s0f0np0 00:15:20.077 altname ens785f0np0 00:15:20.077 inet 192.168.100.8/24 scope global mlx_0_0 00:15:20.077 valid_lft forever preferred_lft forever 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:20.077 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:20.077 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:15:20.077 altname enp24s0f1np1 00:15:20.077 altname ens785f1np1 00:15:20.077 inet 192.168.100.9/24 scope global mlx_0_1 00:15:20.077 valid_lft forever preferred_lft forever 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- 
# [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:20.077 192.168.100.9' 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:20.077 192.168.100.9' 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:20.077 192.168.100.9' 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:15:20.077 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.335 13:45:46 
nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2482528 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2482528 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2482528 ']' 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.335 13:45:46 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.335 [2024-07-15 13:45:46.691267] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:15:20.335 [2024-07-15 13:45:46.691325] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.335 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.335 [2024-07-15 13:45:46.779019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.593 [2024-07-15 13:45:46.868644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.593 [2024-07-15 13:45:46.868697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.593 [2024-07-15 13:45:46.868723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.593 [2024-07-15 13:45:46.868732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.593 [2024-07-15 13:45:46.868739] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
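nvmfappstart launches the target as nvmf_tgt -i 0 -e 0xFFFF -m 0x78 and waitforlisten then polls until pid 2482528 answers on /var/tmp/spdk.sock. The -m 0x78 core mask is binary 0111 1000, i.e. bits 3 through 6 set, which is why the reactor notices that follow report cores 3, 4, 5 and 6. A small sketch for decoding such a mask (illustration only, not part of the test harness):

  mask=0x78
  for c in $(seq 0 7); do (( (mask >> c) & 1 )) && echo "core $c"; done
  # prints core 3 .. core 6, one per line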
00:15:20.593 [2024-07-15 13:45:46.868854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:20.593 [2024-07-15 13:45:46.868943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:20.593 [2024-07-15 13:45:46.869041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.593 [2024-07-15 13:45:46.869042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:21.158 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.158 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:21.158 13:45:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.158 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:21.158 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.158 13:45:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.158 13:45:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:21.158 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.158 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.158 [2024-07-15 13:45:47.593480] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x216ca90/0x2170f80) succeed. 00:15:21.158 [2024-07-15 13:45:47.603041] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x216e0d0/0x21b2610) succeed. 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.419 Malloc0 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.419 [2024-07-15 13:45:47.773696] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:21.419 { 00:15:21.419 "params": { 00:15:21.419 "name": "Nvme$subsystem", 00:15:21.419 "trtype": "$TEST_TRANSPORT", 00:15:21.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:21.419 "adrfam": "ipv4", 00:15:21.419 "trsvcid": "$NVMF_PORT", 00:15:21.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:21.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:21.419 "hdgst": ${hdgst:-false}, 00:15:21.419 "ddgst": ${ddgst:-false} 00:15:21.419 }, 00:15:21.419 "method": "bdev_nvme_attach_controller" 00:15:21.419 } 00:15:21.419 EOF 00:15:21.419 )") 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:21.419 13:45:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:21.419 "params": { 00:15:21.419 "name": "Nvme1", 00:15:21.419 "trtype": "rdma", 00:15:21.419 "traddr": "192.168.100.8", 00:15:21.419 "adrfam": "ipv4", 00:15:21.419 "trsvcid": "4420", 00:15:21.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:21.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:21.419 "hdgst": false, 00:15:21.419 "ddgst": false 00:15:21.419 }, 00:15:21.419 "method": "bdev_nvme_attach_controller" 00:15:21.419 }' 00:15:21.419 [2024-07-15 13:45:47.823378] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:15:21.419 [2024-07-15 13:45:47.823436] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482729 ] 00:15:21.419 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.419 [2024-07-15 13:45:47.909178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:21.676 [2024-07-15 13:45:47.996144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.676 [2024-07-15 13:45:47.996243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.676 [2024-07-15 13:45:47.996244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.676 I/O targets: 00:15:21.676 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:21.676 00:15:21.676 00:15:21.676 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.676 http://cunit.sourceforge.net/ 00:15:21.676 00:15:21.676 00:15:21.676 Suite: bdevio tests on: Nvme1n1 00:15:21.676 Test: blockdev write read block ...passed 00:15:21.676 Test: blockdev write zeroes read block ...passed 00:15:21.676 Test: blockdev write zeroes read no split ...passed 00:15:21.676 Test: blockdev write zeroes read split ...passed 00:15:21.933 Test: blockdev write zeroes read split partial ...passed 00:15:21.933 Test: blockdev reset ...[2024-07-15 13:45:48.208949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:21.934 [2024-07-15 13:45:48.231778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.934 [2024-07-15 13:45:48.258537] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:21.934 passed 00:15:21.934 Test: blockdev write read 8 blocks ...passed 00:15:21.934 Test: blockdev write read size > 128k ...passed 00:15:21.934 Test: blockdev write read invalid size ...passed 00:15:21.934 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:21.934 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:21.934 Test: blockdev write read max offset ...passed 00:15:21.934 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:21.934 Test: blockdev writev readv 8 blocks ...passed 00:15:21.934 Test: blockdev writev readv 30 x 1block ...passed 00:15:21.934 Test: blockdev writev readv block ...passed 00:15:21.934 Test: blockdev writev readv size > 128k ...passed 00:15:21.934 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:21.934 Test: blockdev comparev and writev ...[2024-07-15 13:45:48.262023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.934 [2024-07-15 13:45:48.262065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:21.934 [2024-07-15 13:45:48.262078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.934 [2024-07-15 13:45:48.262089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:21.934 [2024-07-15 13:45:48.262243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.934 [2024-07-15 13:45:48.262255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:21.934 [2024-07-15 13:45:48.262266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.934 [2024-07-15 13:45:48.262275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:21.934 [2024-07-15 13:45:48.262439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.934 [2024-07-15 13:45:48.262454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:21.934 [2024-07-15 13:45:48.262464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.934 [2024-07-15 13:45:48.262474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:21.934 [2024-07-15 13:45:48.262648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.934 [2024-07-15 13:45:48.262659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:21.934 [2024-07-15 13:45:48.262669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.934 [2024-07-15 13:45:48.262678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:21.934 passed 00:15:21.934 Test: blockdev nvme passthru rw ...passed 00:15:21.934 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:45:48.262992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:21.934 [2024-07-15 13:45:48.263004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:21.934 [2024-07-15 13:45:48.263048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:21.934 [2024-07-15 13:45:48.263058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:21.934 [2024-07-15 13:45:48.263100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:21.934 [2024-07-15 13:45:48.263111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:21.934 [2024-07-15 13:45:48.263161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:21.934 [2024-07-15 13:45:48.263172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:21.934 passed 00:15:21.934 Test: blockdev nvme admin passthru ...passed 00:15:21.934 Test: blockdev copy ...passed 00:15:21.934 00:15:21.934 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.934 suites 1 1 n/a 0 0 00:15:21.934 tests 23 23 23 0 0 00:15:21.934 asserts 152 152 152 0 n/a 00:15:21.934 00:15:21.934 Elapsed time = 0.173 seconds 00:15:22.191 13:45:48 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.191 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.191 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.191 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.191 13:45:48 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:22.191 13:45:48 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:22.192 rmmod nvme_rdma 00:15:22.192 rmmod nvme_fabrics 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2482528 ']' 00:15:22.192 13:45:48 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2482528 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 2482528 ']' 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2482528 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2482528 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2482528' 00:15:22.192 killing process with pid 2482528 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2482528 00:15:22.192 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2482528 00:15:22.449 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:22.449 13:45:48 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:22.449 00:15:22.449 real 0m9.266s 00:15:22.449 user 0m11.068s 00:15:22.449 sys 0m5.903s 00:15:22.449 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.449 13:45:48 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.449 ************************************ 00:15:22.449 END TEST nvmf_bdevio 00:15:22.449 ************************************ 00:15:22.449 13:45:48 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:22.449 13:45:48 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:15:22.449 13:45:48 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:22.449 13:45:48 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.449 13:45:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:22.709 ************************************ 00:15:22.709 START TEST nvmf_auth_target 00:15:22.709 ************************************ 00:15:22.709 13:45:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:15:22.709 * Looking for test storage... 
00:15:22.709 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.709 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:22.710 13:45:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:29.278 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:29.278 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.278 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:29.279 Found net devices under 0000:18:00.0: mlx_0_0 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:29.279 Found net devices under 0000:18:00.1: mlx_0_1 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:29.279 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:29.539 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:29.540 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:29.540 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:15:29.540 altname enp24s0f0np0 00:15:29.540 altname ens785f0np0 00:15:29.540 inet 192.168.100.8/24 scope global mlx_0_0 00:15:29.540 valid_lft forever preferred_lft forever 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:29.540 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:29.540 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:15:29.540 altname enp24s0f1np1 00:15:29.540 altname ens785f1np1 00:15:29.540 inet 192.168.100.9/24 scope global mlx_0_1 00:15:29.540 valid_lft forever preferred_lft forever 00:15:29.540 
13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.540 13:45:55 
nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:29.540 192.168.100.9' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:29.540 192.168.100.9' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:29.540 192.168.100.9' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2485752 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2485752 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2485752 ']' 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.540 13:45:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.540 13:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
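For reference, the last few trace entries above reduce to a small shell pattern: the per-interface addresses found by get_rdma_if_list/get_ip_address are joined into RDMA_IP_LIST, and the two target addresses are peeled off with head/tail. A minimal standalone sketch of that step (the values are the ones printed above; this is an illustration, not the harness code itself):

    # Illustrative only -- mirrors the common.sh@456-458 lines traced above.
    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9

With both addresses known, the run then sets NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024', loads nvme-rdma and starts nvmf_tgt, as traced above.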
00:15:29.540 13:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.540 13:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2485846 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:30.476 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7fe9b8f280472693124baaeaa6bd089614a1c8bbb9065434 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.K7X 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7fe9b8f280472693124baaeaa6bd089614a1c8bbb9065434 0 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7fe9b8f280472693124baaeaa6bd089614a1c8bbb9065434 0 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7fe9b8f280472693124baaeaa6bd089614a1c8bbb9065434 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.K7X 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.K7X 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.K7X 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:30.477 13:45:56 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:30.477 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fc1de360396234df7748ad777908a0aa08a4863f3cdef15f95bd35bdee14c8b9 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8mT 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fc1de360396234df7748ad777908a0aa08a4863f3cdef15f95bd35bdee14c8b9 3 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fc1de360396234df7748ad777908a0aa08a4863f3cdef15f95bd35bdee14c8b9 3 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fc1de360396234df7748ad777908a0aa08a4863f3cdef15f95bd35bdee14c8b9 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8mT 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8mT 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.8mT 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=77bfdd49199b9994750ba34fc640bdda 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Hl7 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 77bfdd49199b9994750ba34fc640bdda 1 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 77bfdd49199b9994750ba34fc640bdda 1 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=77bfdd49199b9994750ba34fc640bdda 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Hl7 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Hl7 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Hl7 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fdd75184234b0f535ad80e7b78c71bdb7164dc7962e8a383 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yGT 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fdd75184234b0f535ad80e7b78c71bdb7164dc7962e8a383 2 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fdd75184234b0f535ad80e7b78c71bdb7164dc7962e8a383 2 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fdd75184234b0f535ad80e7b78c71bdb7164dc7962e8a383 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yGT 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yGT 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.yGT 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=53a9e4774285b82cbb032ab49e7bf669142af089f9f66857 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lVx 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 53a9e4774285b82cbb032ab49e7bf669142af089f9f66857 2 00:15:30.736 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 53a9e4774285b82cbb032ab49e7bf669142af089f9f66857 2 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=53a9e4774285b82cbb032ab49e7bf669142af089f9f66857 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lVx 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lVx 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.lVx 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:30.737 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=633d797981923dc41aac83e26a624bf9 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.tjJ 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 633d797981923dc41aac83e26a624bf9 1 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 633d797981923dc41aac83e26a624bf9 1 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=633d797981923dc41aac83e26a624bf9 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.tjJ 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.tjJ 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.tjJ 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b26228f933d4879f962d745cd50172fd53d63d5bf733f2e0658218478c5ca14f 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PAK 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b26228f933d4879f962d745cd50172fd53d63d5bf733f2e0658218478c5ca14f 3 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b26228f933d4879f962d745cd50172fd53d63d5bf733f2e0658218478c5ca14f 3 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b26228f933d4879f962d745cd50172fd53d63d5bf733f2e0658218478c5ca14f 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PAK 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PAK 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.PAK 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2485752 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2485752 ']' 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
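Every key and ctrlr key minted above follows the same gen_dhchap_key pattern: pull random hex from /dev/urandom with xxd, pick a digest id (null=0, sha256=1, sha384=2, sha512=3), and drop the result into a mode-0600 temp file. A rough standalone sketch of that pattern, with the inline python step that wraps the hex into the final DHHC-1:<digest-id>:...: secret left as a comment rather than reproduced:

    gen_key_sketch() {                    # illustrative stand-in, not the real gen_dhchap_key
        local digest=$1 len=$2            # e.g. "null" 48 or "sha512" 64
        local hex file
        hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters of key material
        file=$(mktemp -t "spdk.key-$digest.XXX")          # e.g. /tmp/spdk.key-null.K7X
        # The real helper runs format_dhchap_key here, wrapping $hex into a
        # "DHHC-1:<digest-id>:...:" string before writing; this sketch stores the raw hex.
        printf '%s\n' "$hex" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }
    key0=$(gen_key_sketch null 48)        # usage matching "gen_dhchap_key null 48" above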
00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.053 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2485846 /var/tmp/host.sock 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2485846 ']' 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:31.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.311 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.569 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.569 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:31.569 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.K7X 00:15:31.569 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.569 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.569 13:45:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.569 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.K7X 00:15:31.569 13:45:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.K7X 00:15:31.569 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.8mT ]] 00:15:31.569 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8mT 00:15:31.569 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.569 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.825 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.825 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8mT 00:15:31.825 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8mT 00:15:31.825 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:31.825 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Hl7 00:15:31.825 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.825 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.825 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.825 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Hl7 00:15:31.825 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Hl7 00:15:32.081 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.yGT ]] 00:15:32.081 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yGT 00:15:32.081 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.081 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.081 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.081 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yGT 00:15:32.081 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yGT 00:15:32.337 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:32.337 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lVx 00:15:32.337 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.337 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.337 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.337 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lVx 00:15:32.337 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lVx 00:15:32.594 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.tjJ ]] 00:15:32.594 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tjJ 00:15:32.594 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.594 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.594 13:45:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.594 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tjJ 00:15:32.594 13:45:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tjJ 00:15:32.594 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:32.594 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PAK 00:15:32.594 13:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.594 13:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.594 13:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.594 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.PAK 00:15:32.594 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.PAK 00:15:32.851 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:32.851 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:32.851 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.851 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.851 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:32.851 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:33.108 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:33.108 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.108 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:33.109 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:33.109 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:33.109 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.109 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.109 13:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.109 13:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.109 13:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.109 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.109 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.366 00:15:33.366 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.366 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.366 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.624 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.624 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.624 13:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.624 13:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.624 13:45:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.624 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.624 { 00:15:33.624 "cntlid": 1, 00:15:33.624 "qid": 0, 00:15:33.624 "state": "enabled", 00:15:33.624 "thread": "nvmf_tgt_poll_group_000", 00:15:33.624 "listen_address": { 00:15:33.624 "trtype": "RDMA", 00:15:33.624 "adrfam": "IPv4", 00:15:33.624 "traddr": "192.168.100.8", 00:15:33.624 "trsvcid": "4420" 00:15:33.624 }, 00:15:33.624 "peer_address": { 00:15:33.624 "trtype": "RDMA", 00:15:33.624 "adrfam": "IPv4", 00:15:33.624 "traddr": "192.168.100.8", 00:15:33.624 "trsvcid": "46195" 00:15:33.624 }, 00:15:33.624 "auth": { 00:15:33.624 "state": "completed", 00:15:33.624 "digest": "sha256", 00:15:33.624 "dhgroup": "null" 00:15:33.624 } 00:15:33.624 } 00:15:33.624 ]' 00:15:33.624 13:45:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.624 13:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.624 13:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.624 13:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:33.624 13:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.624 13:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.624 13:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.624 13:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.881 13:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:15:34.449 13:46:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.707 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:34.707 13:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.707 13:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.707 13:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.707 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.707 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:34.707 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.964 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.964 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.222 13:46:01 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.222 { 00:15:35.222 "cntlid": 3, 00:15:35.222 "qid": 0, 00:15:35.222 "state": "enabled", 00:15:35.222 "thread": "nvmf_tgt_poll_group_000", 00:15:35.222 "listen_address": { 00:15:35.222 "trtype": "RDMA", 00:15:35.222 "adrfam": "IPv4", 00:15:35.222 "traddr": "192.168.100.8", 00:15:35.222 "trsvcid": "4420" 00:15:35.222 }, 00:15:35.222 "peer_address": { 00:15:35.222 "trtype": "RDMA", 00:15:35.222 "adrfam": "IPv4", 00:15:35.222 "traddr": "192.168.100.8", 00:15:35.222 "trsvcid": "58090" 00:15:35.222 }, 00:15:35.222 "auth": { 00:15:35.222 "state": "completed", 00:15:35.222 "digest": "sha256", 00:15:35.222 "dhgroup": "null" 00:15:35.222 } 00:15:35.222 } 00:15:35.222 ]' 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.222 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.481 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:35.481 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.481 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.481 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.481 13:46:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.739 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:15:36.306 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.306 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:36.306 13:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.306 13:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.306 13:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.306 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.306 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:36.306 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.565 13:46:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.827 00:15:36.827 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.827 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.827 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.084 { 00:15:37.084 "cntlid": 5, 00:15:37.084 "qid": 0, 00:15:37.084 "state": "enabled", 00:15:37.084 "thread": "nvmf_tgt_poll_group_000", 00:15:37.084 "listen_address": { 00:15:37.084 "trtype": "RDMA", 00:15:37.084 "adrfam": "IPv4", 00:15:37.084 "traddr": "192.168.100.8", 00:15:37.084 "trsvcid": "4420" 00:15:37.084 }, 00:15:37.084 "peer_address": { 00:15:37.084 "trtype": "RDMA", 00:15:37.084 "adrfam": "IPv4", 00:15:37.084 "traddr": "192.168.100.8", 00:15:37.084 "trsvcid": "57953" 00:15:37.084 }, 00:15:37.084 "auth": { 00:15:37.084 "state": "completed", 00:15:37.084 "digest": "sha256", 00:15:37.084 "dhgroup": "null" 00:15:37.084 } 00:15:37.084 } 00:15:37.084 ]' 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.084 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.342 13:46:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:15:37.907 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- 
# hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.165 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.424 00:15:38.424 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.424 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.424 13:46:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.682 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.682 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.682 13:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.682 13:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.682 13:46:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.682 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.682 { 00:15:38.682 "cntlid": 7, 00:15:38.682 "qid": 0, 00:15:38.682 "state": "enabled", 00:15:38.682 "thread": "nvmf_tgt_poll_group_000", 00:15:38.682 "listen_address": { 00:15:38.682 "trtype": "RDMA", 00:15:38.682 "adrfam": "IPv4", 00:15:38.682 "traddr": "192.168.100.8", 00:15:38.682 "trsvcid": "4420" 00:15:38.682 }, 00:15:38.682 "peer_address": { 00:15:38.682 "trtype": "RDMA", 00:15:38.682 "adrfam": "IPv4", 00:15:38.682 "traddr": "192.168.100.8", 00:15:38.682 "trsvcid": "39079" 00:15:38.682 }, 00:15:38.682 "auth": { 00:15:38.682 "state": "completed", 00:15:38.682 "digest": "sha256", 00:15:38.682 "dhgroup": "null" 00:15:38.682 } 00:15:38.682 } 00:15:38.682 ]' 00:15:38.682 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.682 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.682 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.682 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:38.941 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.941 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.941 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.941 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.941 13:46:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret 
DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.874 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.160 00:15:40.160 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.160 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
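The entries above and below repeat one verification cycle per digest/dhgroup/key combination. As a minimal sketch (not part of the captured output), assuming the same socket path, subsystem NQN and host UUID shown in the trace, one iteration of that cycle reduces to the calls below; DHGROUP and KEYID are illustrative placeholders, and rpc_cmd stands for the test framework's target-side RPC wrapper visible in the log.

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

# host side: limit the initiator to the digest and DH group under test
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups $DHGROUP

# target side: register the host with its DH-HMAC-CHAP key
# (the controller key is omitted for key3, via the ${ckeys[...]:+...} expansion in auth.sh)
rpc_cmd nvmf_subsystem_add_host $NQN $HOSTNQN --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID

# host side: attach a controller over RDMA, authenticating with the same key pair
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q $HOSTNQN -n $NQN --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID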
00:15:40.160 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.419 { 00:15:40.419 "cntlid": 9, 00:15:40.419 "qid": 0, 00:15:40.419 "state": "enabled", 00:15:40.419 "thread": "nvmf_tgt_poll_group_000", 00:15:40.419 "listen_address": { 00:15:40.419 "trtype": "RDMA", 00:15:40.419 "adrfam": "IPv4", 00:15:40.419 "traddr": "192.168.100.8", 00:15:40.419 "trsvcid": "4420" 00:15:40.419 }, 00:15:40.419 "peer_address": { 00:15:40.419 "trtype": "RDMA", 00:15:40.419 "adrfam": "IPv4", 00:15:40.419 "traddr": "192.168.100.8", 00:15:40.419 "trsvcid": "46812" 00:15:40.419 }, 00:15:40.419 "auth": { 00:15:40.419 "state": "completed", 00:15:40.419 "digest": "sha256", 00:15:40.419 "dhgroup": "ffdhe2048" 00:15:40.419 } 00:15:40.419 } 00:15:40.419 ]' 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.419 13:46:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.678 13:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:15:41.283 13:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.541 13:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:41.541 13:46:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.541 13:46:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.541 13:46:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.541 13:46:07 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.541 13:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:41.541 13:46:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:41.541 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:41.541 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.541 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:41.541 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:41.541 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:41.541 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.541 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.541 13:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.541 13:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.799 13:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.799 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.799 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.799 00:15:41.799 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.799 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.799 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.058 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.058 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.058 13:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.058 13:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.058 13:46:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.058 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.058 { 00:15:42.058 "cntlid": 11, 00:15:42.058 "qid": 0, 00:15:42.058 "state": "enabled", 00:15:42.058 "thread": "nvmf_tgt_poll_group_000", 00:15:42.058 "listen_address": { 00:15:42.058 "trtype": "RDMA", 00:15:42.058 
"adrfam": "IPv4", 00:15:42.058 "traddr": "192.168.100.8", 00:15:42.058 "trsvcid": "4420" 00:15:42.058 }, 00:15:42.058 "peer_address": { 00:15:42.058 "trtype": "RDMA", 00:15:42.058 "adrfam": "IPv4", 00:15:42.058 "traddr": "192.168.100.8", 00:15:42.058 "trsvcid": "51293" 00:15:42.058 }, 00:15:42.058 "auth": { 00:15:42.058 "state": "completed", 00:15:42.058 "digest": "sha256", 00:15:42.058 "dhgroup": "ffdhe2048" 00:15:42.058 } 00:15:42.058 } 00:15:42.058 ]' 00:15:42.058 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.058 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.058 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.318 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.318 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.318 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.318 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.318 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.318 13:46:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:43.254 13:46:09 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.254 13:46:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.511 00:15:43.511 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.511 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.511 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.769 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.769 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.769 13:46:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.769 13:46:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.769 13:46:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.769 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.769 { 00:15:43.769 "cntlid": 13, 00:15:43.769 "qid": 0, 00:15:43.769 "state": "enabled", 00:15:43.769 "thread": "nvmf_tgt_poll_group_000", 00:15:43.769 "listen_address": { 00:15:43.769 "trtype": "RDMA", 00:15:43.769 "adrfam": "IPv4", 00:15:43.769 "traddr": "192.168.100.8", 00:15:43.769 "trsvcid": "4420" 00:15:43.769 }, 00:15:43.769 "peer_address": { 00:15:43.769 "trtype": "RDMA", 00:15:43.769 "adrfam": "IPv4", 00:15:43.769 "traddr": "192.168.100.8", 00:15:43.769 "trsvcid": "57621" 00:15:43.769 }, 00:15:43.769 "auth": { 00:15:43.769 "state": "completed", 00:15:43.769 "digest": "sha256", 00:15:43.769 "dhgroup": "ffdhe2048" 00:15:43.769 } 00:15:43.769 } 00:15:43.769 ]' 00:15:43.769 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.769 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.769 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.027 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.027 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
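After the controller attaches, each iteration verifies the negotiated authentication parameters and tears the session down before the next combination. A rough sketch of that second half, reusing the RPC/NQN/HOSTNQN shorthand from the sketch above and abbreviating the DHHC-1 secrets, which appear in full in the trace:

# target side: the established queue pair should report the digest, the DH group and a completed auth state
rpc_cmd nvmf_subsystem_get_qpairs $NQN | jq -r '.[0].auth.digest'   # sha256
rpc_cmd nvmf_subsystem_get_qpairs $NQN | jq -r '.[0].auth.dhgroup'  # null, ffdhe2048, ffdhe3072, ...
rpc_cmd nvmf_subsystem_get_qpairs $NQN | jq -r '.[0].auth.state'    # completed

# host side: drop the bdev controller, then repeat the handshake with the kernel initiator
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n $NQN -i 1 -q $HOSTNQN --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n $NQN

# target side: remove the host entry so the next digest/dhgroup/key combination starts clean
rpc_cmd nvmf_subsystem_remove_host $NQN $HOSTNQN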
00:15:44.027 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.027 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.027 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.027 13:46:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:44.961 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:45.219 00:15:45.219 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.219 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.219 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.503 { 00:15:45.503 "cntlid": 15, 00:15:45.503 "qid": 0, 00:15:45.503 "state": "enabled", 00:15:45.503 "thread": "nvmf_tgt_poll_group_000", 00:15:45.503 "listen_address": { 00:15:45.503 "trtype": "RDMA", 00:15:45.503 "adrfam": "IPv4", 00:15:45.503 "traddr": "192.168.100.8", 00:15:45.503 "trsvcid": "4420" 00:15:45.503 }, 00:15:45.503 "peer_address": { 00:15:45.503 "trtype": "RDMA", 00:15:45.503 "adrfam": "IPv4", 00:15:45.503 "traddr": "192.168.100.8", 00:15:45.503 "trsvcid": "49718" 00:15:45.503 }, 00:15:45.503 "auth": { 00:15:45.503 "state": "completed", 00:15:45.503 "digest": "sha256", 00:15:45.503 "dhgroup": "ffdhe2048" 00:15:45.503 } 00:15:45.503 } 00:15:45.503 ]' 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.503 13:46:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.503 13:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.503 13:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.503 13:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.759 13:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:15:46.324 13:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.581 13:46:12 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:46.581 13:46:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.581 13:46:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.581 13:46:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.581 13:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.581 13:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.581 13:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:46.581 13:46:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.838 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.149 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.149 13:46:13 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.149 { 00:15:47.149 "cntlid": 17, 00:15:47.149 "qid": 0, 00:15:47.149 "state": "enabled", 00:15:47.149 "thread": "nvmf_tgt_poll_group_000", 00:15:47.149 "listen_address": { 00:15:47.149 "trtype": "RDMA", 00:15:47.149 "adrfam": "IPv4", 00:15:47.149 "traddr": "192.168.100.8", 00:15:47.149 "trsvcid": "4420" 00:15:47.149 }, 00:15:47.149 "peer_address": { 00:15:47.149 "trtype": "RDMA", 00:15:47.149 "adrfam": "IPv4", 00:15:47.149 "traddr": "192.168.100.8", 00:15:47.149 "trsvcid": "57993" 00:15:47.149 }, 00:15:47.149 "auth": { 00:15:47.149 "state": "completed", 00:15:47.149 "digest": "sha256", 00:15:47.149 "dhgroup": "ffdhe3072" 00:15:47.149 } 00:15:47.149 } 00:15:47.149 ]' 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.149 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.407 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:47.407 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.407 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.407 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.407 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.664 13:46:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:15:48.228 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.228 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:48.228 13:46:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.228 13:46:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.228 13:46:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.228 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.228 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.228 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.485 13:46:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.743 00:15:48.743 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.743 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.743 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.003 { 00:15:49.003 "cntlid": 19, 00:15:49.003 "qid": 0, 00:15:49.003 "state": "enabled", 00:15:49.003 "thread": "nvmf_tgt_poll_group_000", 00:15:49.003 "listen_address": { 00:15:49.003 "trtype": "RDMA", 00:15:49.003 "adrfam": "IPv4", 00:15:49.003 "traddr": "192.168.100.8", 00:15:49.003 "trsvcid": "4420" 00:15:49.003 }, 00:15:49.003 "peer_address": { 00:15:49.003 "trtype": "RDMA", 00:15:49.003 "adrfam": "IPv4", 00:15:49.003 "traddr": "192.168.100.8", 00:15:49.003 "trsvcid": "41678" 00:15:49.003 }, 00:15:49.003 "auth": { 
00:15:49.003 "state": "completed", 00:15:49.003 "digest": "sha256", 00:15:49.003 "dhgroup": "ffdhe3072" 00:15:49.003 } 00:15:49.003 } 00:15:49.003 ]' 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.003 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.260 13:46:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:15:49.826 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.826 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:49.826 13:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.826 13:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.085 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.343 00:15:50.343 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.343 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.343 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.600 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.600 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.600 13:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.600 13:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.600 13:46:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.600 13:46:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.600 { 00:15:50.600 "cntlid": 21, 00:15:50.600 "qid": 0, 00:15:50.600 "state": "enabled", 00:15:50.600 "thread": "nvmf_tgt_poll_group_000", 00:15:50.600 "listen_address": { 00:15:50.600 "trtype": "RDMA", 00:15:50.600 "adrfam": "IPv4", 00:15:50.600 "traddr": "192.168.100.8", 00:15:50.600 "trsvcid": "4420" 00:15:50.600 }, 00:15:50.600 "peer_address": { 00:15:50.600 "trtype": "RDMA", 00:15:50.600 "adrfam": "IPv4", 00:15:50.600 "traddr": "192.168.100.8", 00:15:50.600 "trsvcid": "44303" 00:15:50.600 }, 00:15:50.600 "auth": { 00:15:50.600 "state": "completed", 00:15:50.600 "digest": "sha256", 00:15:50.600 "dhgroup": "ffdhe3072" 00:15:50.600 } 00:15:50.600 } 00:15:50.600 ]' 00:15:50.600 13:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.600 13:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.600 13:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.600 13:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.600 13:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.857 13:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.858 13:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.858 13:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.858 13:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:15:51.425 13:46:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.768 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:51.769 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:51.769 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.769 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:15:51.769 13:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.769 13:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.769 13:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.769 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:51.769 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.026 00:15:52.026 13:46:18 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.026 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.026 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.285 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.285 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.285 13:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.285 13:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.285 13:46:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.285 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.285 { 00:15:52.285 "cntlid": 23, 00:15:52.285 "qid": 0, 00:15:52.285 "state": "enabled", 00:15:52.285 "thread": "nvmf_tgt_poll_group_000", 00:15:52.285 "listen_address": { 00:15:52.285 "trtype": "RDMA", 00:15:52.285 "adrfam": "IPv4", 00:15:52.285 "traddr": "192.168.100.8", 00:15:52.285 "trsvcid": "4420" 00:15:52.285 }, 00:15:52.285 "peer_address": { 00:15:52.285 "trtype": "RDMA", 00:15:52.285 "adrfam": "IPv4", 00:15:52.285 "traddr": "192.168.100.8", 00:15:52.285 "trsvcid": "46074" 00:15:52.285 }, 00:15:52.285 "auth": { 00:15:52.285 "state": "completed", 00:15:52.285 "digest": "sha256", 00:15:52.285 "dhgroup": "ffdhe3072" 00:15:52.285 } 00:15:52.285 } 00:15:52.285 ]' 00:15:52.285 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.285 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.285 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.544 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.545 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.545 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.545 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.545 13:46:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.545 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.480 13:46:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.739 00:15:53.998 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.998 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.998 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.998 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.998 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.998 13:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.998 13:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.998 13:46:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.998 13:46:20 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.998 { 00:15:53.998 "cntlid": 25, 00:15:53.998 "qid": 0, 00:15:53.998 "state": "enabled", 00:15:53.998 "thread": "nvmf_tgt_poll_group_000", 00:15:53.998 "listen_address": { 00:15:53.998 "trtype": "RDMA", 00:15:53.998 "adrfam": "IPv4", 00:15:53.998 "traddr": "192.168.100.8", 00:15:53.998 "trsvcid": "4420" 00:15:53.998 }, 00:15:53.998 "peer_address": { 00:15:53.998 "trtype": "RDMA", 00:15:53.998 "adrfam": "IPv4", 00:15:53.998 "traddr": "192.168.100.8", 00:15:53.998 "trsvcid": "33063" 00:15:53.998 }, 00:15:53.998 "auth": { 00:15:53.998 "state": "completed", 00:15:53.998 "digest": "sha256", 00:15:53.998 "dhgroup": "ffdhe4096" 00:15:53.998 } 00:15:53.998 } 00:15:53.998 ]' 00:15:53.998 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.257 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.257 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.257 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:54.257 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.257 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.257 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.257 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.515 13:46:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:15:55.083 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.083 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:55.083 13:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.083 13:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.083 13:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.083 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.083 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.083 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.342 13:46:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.600 00:15:55.600 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.600 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.600 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.859 { 00:15:55.859 "cntlid": 27, 00:15:55.859 "qid": 0, 00:15:55.859 "state": "enabled", 00:15:55.859 "thread": "nvmf_tgt_poll_group_000", 00:15:55.859 "listen_address": { 00:15:55.859 "trtype": "RDMA", 00:15:55.859 "adrfam": "IPv4", 00:15:55.859 "traddr": "192.168.100.8", 00:15:55.859 "trsvcid": "4420" 00:15:55.859 }, 00:15:55.859 "peer_address": { 00:15:55.859 "trtype": "RDMA", 00:15:55.859 "adrfam": "IPv4", 00:15:55.859 "traddr": "192.168.100.8", 00:15:55.859 "trsvcid": "56401" 00:15:55.859 }, 00:15:55.859 "auth": { 00:15:55.859 "state": "completed", 00:15:55.859 "digest": "sha256", 00:15:55.859 "dhgroup": "ffdhe4096" 00:15:55.859 } 00:15:55.859 } 00:15:55.859 ]' 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 
]] 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.859 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.118 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.118 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.118 13:46:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:15:56.684 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.942 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:56.942 13:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.942 13:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.942 13:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.942 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.942 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.942 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.202 13:46:23 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.202 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.461 00:15:57.461 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.461 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.461 13:46:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.720 { 00:15:57.720 "cntlid": 29, 00:15:57.720 "qid": 0, 00:15:57.720 "state": "enabled", 00:15:57.720 "thread": "nvmf_tgt_poll_group_000", 00:15:57.720 "listen_address": { 00:15:57.720 "trtype": "RDMA", 00:15:57.720 "adrfam": "IPv4", 00:15:57.720 "traddr": "192.168.100.8", 00:15:57.720 "trsvcid": "4420" 00:15:57.720 }, 00:15:57.720 "peer_address": { 00:15:57.720 "trtype": "RDMA", 00:15:57.720 "adrfam": "IPv4", 00:15:57.720 "traddr": "192.168.100.8", 00:15:57.720 "trsvcid": "48427" 00:15:57.720 }, 00:15:57.720 "auth": { 00:15:57.720 "state": "completed", 00:15:57.720 "digest": "sha256", 00:15:57.720 "dhgroup": "ffdhe4096" 00:15:57.720 } 00:15:57.720 } 00:15:57.720 ]' 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.720 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.978 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 
809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:15:58.545 13:46:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.804 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.063 00:15:59.063 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.063 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.063 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:15:59.322 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.322 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.322 13:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.322 13:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.322 13:46:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.322 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.322 { 00:15:59.322 "cntlid": 31, 00:15:59.322 "qid": 0, 00:15:59.322 "state": "enabled", 00:15:59.322 "thread": "nvmf_tgt_poll_group_000", 00:15:59.322 "listen_address": { 00:15:59.322 "trtype": "RDMA", 00:15:59.322 "adrfam": "IPv4", 00:15:59.322 "traddr": "192.168.100.8", 00:15:59.322 "trsvcid": "4420" 00:15:59.322 }, 00:15:59.322 "peer_address": { 00:15:59.322 "trtype": "RDMA", 00:15:59.322 "adrfam": "IPv4", 00:15:59.322 "traddr": "192.168.100.8", 00:15:59.322 "trsvcid": "51617" 00:15:59.322 }, 00:15:59.322 "auth": { 00:15:59.322 "state": "completed", 00:15:59.322 "digest": "sha256", 00:15:59.322 "dhgroup": "ffdhe4096" 00:15:59.322 } 00:15:59.322 } 00:15:59.322 ]' 00:15:59.322 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.322 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.322 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.581 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.581 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.581 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.581 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.581 13:46:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.581 13:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:16:00.515 13:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.515 13:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:00.515 13:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.515 13:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.515 13:46:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.515 13:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.515 13:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
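The trace above has finished the ffdhe4096 pass and is about to repeat the same connect_authenticate sequence for ffdhe6144. As a readability aid, here is a minimal sketch of one target-side iteration as exercised in this run; rpc.py stands for the full scripts/rpc.py path shown in the trace, DIGEST/DHGROUP/KEYID are illustrative placeholders for the values the loops supply (sha256; ffdhe4096/6144/8192; key0..key3), and the controller key is omitted when the loop has no ckey for that index (key3 above):
# Restrict the host bdev layer to the digest/dhgroup under test (host app socket, as in the trace).
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
# Register the host on the target subsystem with the DH-HMAC-CHAP key pair (target-side RPC; its socket is elided in the trace).
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
# Attach a controller over RDMA with the same keys, then confirm the qpair authenticated with the expected parameters.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'   # expect state "completed" with the chosen digest/dhgroup
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0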
00:16:00.515 13:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:00.515 13:46:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:00.515 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:00.515 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.515 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:00.515 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:00.515 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:00.515 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.515 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.516 13:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.516 13:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.516 13:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.516 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.516 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.080 00:16:01.080 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.080 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.080 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.080 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.080 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.080 13:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.080 13:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.080 13:46:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.080 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.080 { 00:16:01.080 "cntlid": 33, 00:16:01.080 "qid": 0, 00:16:01.080 "state": "enabled", 00:16:01.080 "thread": "nvmf_tgt_poll_group_000", 00:16:01.080 "listen_address": { 00:16:01.080 "trtype": "RDMA", 00:16:01.080 "adrfam": "IPv4", 00:16:01.080 "traddr": "192.168.100.8", 00:16:01.080 
"trsvcid": "4420" 00:16:01.080 }, 00:16:01.080 "peer_address": { 00:16:01.080 "trtype": "RDMA", 00:16:01.080 "adrfam": "IPv4", 00:16:01.080 "traddr": "192.168.100.8", 00:16:01.080 "trsvcid": "56810" 00:16:01.080 }, 00:16:01.080 "auth": { 00:16:01.080 "state": "completed", 00:16:01.080 "digest": "sha256", 00:16:01.080 "dhgroup": "ffdhe6144" 00:16:01.080 } 00:16:01.080 } 00:16:01.080 ]' 00:16:01.080 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.339 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.339 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.339 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.339 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.339 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.339 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.339 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.597 13:46:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:16:02.164 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.164 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:02.164 13:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.164 13:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.164 13:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.164 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.164 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:02.164 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.422 13:46:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.681 00:16:02.681 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.681 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.681 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.941 { 00:16:02.941 "cntlid": 35, 00:16:02.941 "qid": 0, 00:16:02.941 "state": "enabled", 00:16:02.941 "thread": "nvmf_tgt_poll_group_000", 00:16:02.941 "listen_address": { 00:16:02.941 "trtype": "RDMA", 00:16:02.941 "adrfam": "IPv4", 00:16:02.941 "traddr": "192.168.100.8", 00:16:02.941 "trsvcid": "4420" 00:16:02.941 }, 00:16:02.941 "peer_address": { 00:16:02.941 "trtype": "RDMA", 00:16:02.941 "adrfam": "IPv4", 00:16:02.941 "traddr": "192.168.100.8", 00:16:02.941 "trsvcid": "57509" 00:16:02.941 }, 00:16:02.941 "auth": { 00:16:02.941 "state": "completed", 00:16:02.941 "digest": "sha256", 00:16:02.941 "dhgroup": "ffdhe6144" 00:16:02.941 } 00:16:02.941 } 00:16:02.941 ]' 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:02.941 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.200 13:46:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.200 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.200 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.200 13:46:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.136 13:46:30 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.702 00:16:04.702 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.702 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.702 13:46:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.702 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.702 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.702 13:46:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.702 13:46:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.702 13:46:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.702 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.702 { 00:16:04.702 "cntlid": 37, 00:16:04.702 "qid": 0, 00:16:04.702 "state": "enabled", 00:16:04.702 "thread": "nvmf_tgt_poll_group_000", 00:16:04.702 "listen_address": { 00:16:04.702 "trtype": "RDMA", 00:16:04.702 "adrfam": "IPv4", 00:16:04.702 "traddr": "192.168.100.8", 00:16:04.702 "trsvcid": "4420" 00:16:04.702 }, 00:16:04.702 "peer_address": { 00:16:04.702 "trtype": "RDMA", 00:16:04.702 "adrfam": "IPv4", 00:16:04.702 "traddr": "192.168.100.8", 00:16:04.702 "trsvcid": "36371" 00:16:04.702 }, 00:16:04.702 "auth": { 00:16:04.702 "state": "completed", 00:16:04.702 "digest": "sha256", 00:16:04.702 "dhgroup": "ffdhe6144" 00:16:04.702 } 00:16:04.702 } 00:16:04.702 ]' 00:16:04.702 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.961 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.961 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.961 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.962 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.962 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.962 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.962 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.220 13:46:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:16:05.788 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.788 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.788 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:05.788 13:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.788 13:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.788 13:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.788 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.788 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.788 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:06.047 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:06.305 00:16:06.305 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.305 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.305 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.565 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.565 13:46:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.565 13:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:06.565 13:46:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.565 13:46:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.565 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.565 { 00:16:06.565 "cntlid": 39, 00:16:06.565 "qid": 0, 00:16:06.565 "state": "enabled", 00:16:06.565 "thread": "nvmf_tgt_poll_group_000", 00:16:06.565 "listen_address": { 00:16:06.565 "trtype": "RDMA", 00:16:06.565 "adrfam": "IPv4", 00:16:06.565 "traddr": "192.168.100.8", 00:16:06.565 "trsvcid": "4420" 00:16:06.565 }, 00:16:06.565 "peer_address": { 00:16:06.565 "trtype": "RDMA", 00:16:06.565 "adrfam": "IPv4", 00:16:06.565 "traddr": "192.168.100.8", 00:16:06.565 "trsvcid": "59819" 00:16:06.565 }, 00:16:06.565 "auth": { 00:16:06.565 "state": "completed", 00:16:06.565 "digest": "sha256", 00:16:06.565 "dhgroup": "ffdhe6144" 00:16:06.565 } 00:16:06.565 } 00:16:06.565 ]' 00:16:06.565 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.565 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.565 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.565 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:06.565 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.838 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.838 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.838 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.838 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:16:07.427 13:46:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.687 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:07.687 13:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.687 13:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.687 13:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.687 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.687 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.687 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:07.687 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.947 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.206 00:16:08.466 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.466 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.466 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.466 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.466 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.466 13:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.466 13:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.466 13:46:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.466 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.466 { 00:16:08.466 "cntlid": 41, 00:16:08.466 "qid": 0, 00:16:08.466 "state": "enabled", 00:16:08.466 "thread": "nvmf_tgt_poll_group_000", 00:16:08.466 "listen_address": { 00:16:08.466 "trtype": "RDMA", 00:16:08.466 "adrfam": "IPv4", 00:16:08.466 "traddr": "192.168.100.8", 00:16:08.466 "trsvcid": "4420" 00:16:08.466 }, 00:16:08.466 "peer_address": { 00:16:08.466 "trtype": "RDMA", 00:16:08.466 "adrfam": "IPv4", 00:16:08.466 "traddr": "192.168.100.8", 00:16:08.466 "trsvcid": "44367" 00:16:08.466 }, 00:16:08.466 "auth": { 00:16:08.466 "state": "completed", 00:16:08.466 "digest": "sha256", 00:16:08.466 "dhgroup": "ffdhe8192" 
00:16:08.466 } 00:16:08.466 } 00:16:08.466 ]' 00:16:08.466 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.725 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.725 13:46:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.725 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:08.725 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.725 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.725 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.725 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.984 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:16:09.552 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.552 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:09.552 13:46:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.552 13:46:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.553 13:46:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.553 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.553 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:09.553 13:46:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:09.811 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:09.811 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.812 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.812 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:09.812 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:09.812 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.812 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.812 13:46:36 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.812 13:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.812 13:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.812 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.812 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.380 00:16:10.380 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.380 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.380 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.380 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.380 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.380 13:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.380 13:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.640 13:46:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.640 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.640 { 00:16:10.640 "cntlid": 43, 00:16:10.640 "qid": 0, 00:16:10.640 "state": "enabled", 00:16:10.640 "thread": "nvmf_tgt_poll_group_000", 00:16:10.640 "listen_address": { 00:16:10.640 "trtype": "RDMA", 00:16:10.640 "adrfam": "IPv4", 00:16:10.640 "traddr": "192.168.100.8", 00:16:10.640 "trsvcid": "4420" 00:16:10.640 }, 00:16:10.640 "peer_address": { 00:16:10.640 "trtype": "RDMA", 00:16:10.640 "adrfam": "IPv4", 00:16:10.640 "traddr": "192.168.100.8", 00:16:10.640 "trsvcid": "45303" 00:16:10.640 }, 00:16:10.640 "auth": { 00:16:10.640 "state": "completed", 00:16:10.640 "digest": "sha256", 00:16:10.640 "dhgroup": "ffdhe8192" 00:16:10.640 } 00:16:10.640 } 00:16:10.640 ]' 00:16:10.640 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.640 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.640 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.640 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.640 13:46:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.640 13:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.640 13:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.640 13:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.899 13:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:16:11.466 13:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.466 13:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:11.466 13:46:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.466 13:46:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.466 13:46:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.466 13:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.466 13:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.466 13:46:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.725 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.292 
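Alongside the SPDK host path, each digest/dhgroup/key combination is also exercised with the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP secrets passed in-band, disconnects, and the host entry is removed before the next key is tried. A minimal sketch of that step with the NQNs from this run follows; <host-secret> and <ctrl-secret> stand in for the DHHC-1:xx:... strings printed in the trace and are not real keys:
# Kernel host connect with in-band DH-HMAC-CHAP secrets (same flags as the nvme connect lines in this trace).
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# Target side then drops the host registration before the next iteration (target-side RPC; socket not shown in the trace).
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562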
00:16:12.292 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.292 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.292 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.292 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.292 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.292 13:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.292 13:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.550 13:46:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.550 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.550 { 00:16:12.550 "cntlid": 45, 00:16:12.550 "qid": 0, 00:16:12.550 "state": "enabled", 00:16:12.550 "thread": "nvmf_tgt_poll_group_000", 00:16:12.550 "listen_address": { 00:16:12.550 "trtype": "RDMA", 00:16:12.550 "adrfam": "IPv4", 00:16:12.550 "traddr": "192.168.100.8", 00:16:12.550 "trsvcid": "4420" 00:16:12.550 }, 00:16:12.550 "peer_address": { 00:16:12.550 "trtype": "RDMA", 00:16:12.550 "adrfam": "IPv4", 00:16:12.550 "traddr": "192.168.100.8", 00:16:12.550 "trsvcid": "54809" 00:16:12.550 }, 00:16:12.550 "auth": { 00:16:12.550 "state": "completed", 00:16:12.550 "digest": "sha256", 00:16:12.550 "dhgroup": "ffdhe8192" 00:16:12.550 } 00:16:12.550 } 00:16:12.550 ]' 00:16:12.550 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.550 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.550 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.550 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.550 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.550 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.550 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.550 13:46:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.809 13:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:16:13.377 13:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.637 13:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:13.637 13:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:13.637 13:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.637 13:46:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.637 13:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.637 13:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.637 13:46:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.637 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.291 00:16:14.291 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.291 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.291 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.549 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.549 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.549 13:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.549 13:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.549 13:46:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.549 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.549 { 00:16:14.549 "cntlid": 47, 00:16:14.549 "qid": 0, 
00:16:14.549 "state": "enabled", 00:16:14.549 "thread": "nvmf_tgt_poll_group_000", 00:16:14.549 "listen_address": { 00:16:14.549 "trtype": "RDMA", 00:16:14.549 "adrfam": "IPv4", 00:16:14.549 "traddr": "192.168.100.8", 00:16:14.549 "trsvcid": "4420" 00:16:14.549 }, 00:16:14.549 "peer_address": { 00:16:14.550 "trtype": "RDMA", 00:16:14.550 "adrfam": "IPv4", 00:16:14.550 "traddr": "192.168.100.8", 00:16:14.550 "trsvcid": "50294" 00:16:14.550 }, 00:16:14.550 "auth": { 00:16:14.550 "state": "completed", 00:16:14.550 "digest": "sha256", 00:16:14.550 "dhgroup": "ffdhe8192" 00:16:14.550 } 00:16:14.550 } 00:16:14.550 ]' 00:16:14.550 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.550 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.550 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.550 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.550 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.550 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.550 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.550 13:46:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.809 13:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:16:15.377 13:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.634 13:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:15.634 13:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.634 13:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.634 13:46:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.634 13:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:15.634 13:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.634 13:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.634 13:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:15.634 13:46:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.634 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.891 00:16:15.891 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.891 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.891 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.148 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.148 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.148 13:46:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.148 13:46:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.148 13:46:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.148 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.148 { 00:16:16.148 "cntlid": 49, 00:16:16.148 "qid": 0, 00:16:16.148 "state": "enabled", 00:16:16.148 "thread": "nvmf_tgt_poll_group_000", 00:16:16.148 "listen_address": { 00:16:16.148 "trtype": "RDMA", 00:16:16.148 "adrfam": "IPv4", 00:16:16.148 "traddr": "192.168.100.8", 00:16:16.148 "trsvcid": "4420" 00:16:16.148 }, 00:16:16.148 "peer_address": { 00:16:16.148 "trtype": "RDMA", 00:16:16.148 "adrfam": "IPv4", 00:16:16.148 "traddr": "192.168.100.8", 00:16:16.148 "trsvcid": "48271" 00:16:16.148 }, 00:16:16.148 "auth": { 00:16:16.148 "state": "completed", 00:16:16.148 "digest": "sha384", 00:16:16.148 "dhgroup": "null" 00:16:16.148 } 00:16:16.148 } 00:16:16.148 ]' 00:16:16.148 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.148 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.148 13:46:42 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.148 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:16.148 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.407 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.407 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.407 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.407 13:46:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.343 13:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.601 13:46:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.601 13:46:43 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.601 13:46:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.601 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.859 { 00:16:17.859 "cntlid": 51, 00:16:17.859 "qid": 0, 00:16:17.859 "state": "enabled", 00:16:17.859 "thread": "nvmf_tgt_poll_group_000", 00:16:17.859 "listen_address": { 00:16:17.859 "trtype": "RDMA", 00:16:17.859 "adrfam": "IPv4", 00:16:17.859 "traddr": "192.168.100.8", 00:16:17.859 "trsvcid": "4420" 00:16:17.859 }, 00:16:17.859 "peer_address": { 00:16:17.859 "trtype": "RDMA", 00:16:17.859 "adrfam": "IPv4", 00:16:17.859 "traddr": "192.168.100.8", 00:16:17.859 "trsvcid": "36934" 00:16:17.859 }, 00:16:17.859 "auth": { 00:16:17.859 "state": "completed", 00:16:17.859 "digest": "sha384", 00:16:17.859 "dhgroup": "null" 00:16:17.859 } 00:16:17.859 } 00:16:17.859 ]' 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.859 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.118 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:18.118 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.118 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.118 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.118 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.118 13:46:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 
--dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.055 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.316 00:16:19.575 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.575 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.575 13:46:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.575 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.575 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.575 13:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.575 13:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.575 13:46:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.575 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.575 { 00:16:19.575 "cntlid": 53, 00:16:19.575 "qid": 0, 00:16:19.575 "state": "enabled", 00:16:19.575 "thread": "nvmf_tgt_poll_group_000", 00:16:19.575 "listen_address": { 00:16:19.575 "trtype": "RDMA", 00:16:19.575 "adrfam": "IPv4", 00:16:19.575 "traddr": "192.168.100.8", 00:16:19.575 "trsvcid": "4420" 00:16:19.575 }, 00:16:19.575 "peer_address": { 00:16:19.575 "trtype": "RDMA", 00:16:19.575 "adrfam": "IPv4", 00:16:19.575 "traddr": "192.168.100.8", 00:16:19.575 "trsvcid": "45141" 00:16:19.575 }, 00:16:19.575 "auth": { 00:16:19.575 "state": "completed", 00:16:19.575 "digest": "sha384", 00:16:19.575 "dhgroup": "null" 00:16:19.575 } 00:16:19.575 } 00:16:19.575 ]' 00:16:19.575 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.575 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.834 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.834 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:19.834 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.834 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.834 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.834 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.124 13:46:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:16:20.694 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.694 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:20.694 13:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.694 13:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.694 13:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.694 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.694 13:46:47 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.694 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.953 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:20.953 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.953 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:20.953 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:20.953 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:20.953 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.953 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:20.953 13:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.954 13:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.954 13:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.954 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.954 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.213 00:16:21.213 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.213 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.213 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.471 { 00:16:21.471 "cntlid": 55, 00:16:21.471 "qid": 0, 00:16:21.471 "state": "enabled", 00:16:21.471 "thread": "nvmf_tgt_poll_group_000", 00:16:21.471 "listen_address": { 00:16:21.471 "trtype": "RDMA", 00:16:21.471 "adrfam": "IPv4", 00:16:21.471 "traddr": "192.168.100.8", 00:16:21.471 "trsvcid": "4420" 00:16:21.471 }, 00:16:21.471 "peer_address": { 00:16:21.471 "trtype": "RDMA", 00:16:21.471 "adrfam": "IPv4", 
00:16:21.471 "traddr": "192.168.100.8", 00:16:21.471 "trsvcid": "37027" 00:16:21.471 }, 00:16:21.471 "auth": { 00:16:21.471 "state": "completed", 00:16:21.471 "digest": "sha384", 00:16:21.471 "dhgroup": "null" 00:16:21.471 } 00:16:21.471 } 00:16:21.471 ]' 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.471 13:46:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.730 13:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:16:22.298 13:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.299 13:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:22.299 13:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.299 13:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.559 13:46:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.559 13:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.559 13:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.559 13:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:22.559 13:46:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.559 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.818 00:16:22.818 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.818 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.818 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.076 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.076 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.076 13:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.076 13:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.076 13:46:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.077 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.077 { 00:16:23.077 "cntlid": 57, 00:16:23.077 "qid": 0, 00:16:23.077 "state": "enabled", 00:16:23.077 "thread": "nvmf_tgt_poll_group_000", 00:16:23.077 "listen_address": { 00:16:23.077 "trtype": "RDMA", 00:16:23.077 "adrfam": "IPv4", 00:16:23.077 "traddr": "192.168.100.8", 00:16:23.077 "trsvcid": "4420" 00:16:23.077 }, 00:16:23.077 "peer_address": { 00:16:23.077 "trtype": "RDMA", 00:16:23.077 "adrfam": "IPv4", 00:16:23.077 "traddr": "192.168.100.8", 00:16:23.077 "trsvcid": "44984" 00:16:23.077 }, 00:16:23.077 "auth": { 00:16:23.077 "state": "completed", 00:16:23.077 "digest": "sha384", 00:16:23.077 "dhgroup": "ffdhe2048" 00:16:23.077 } 00:16:23.077 } 00:16:23.077 ]' 00:16:23.077 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.077 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.077 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.077 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.077 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.334 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.334 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- 
# hostrpc bdev_nvme_detach_controller nvme0 00:16:23.334 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.334 13:46:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.270 13:46:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.530 00:16:24.530 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.530 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.530 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.789 { 00:16:24.789 "cntlid": 59, 00:16:24.789 "qid": 0, 00:16:24.789 "state": "enabled", 00:16:24.789 "thread": "nvmf_tgt_poll_group_000", 00:16:24.789 "listen_address": { 00:16:24.789 "trtype": "RDMA", 00:16:24.789 "adrfam": "IPv4", 00:16:24.789 "traddr": "192.168.100.8", 00:16:24.789 "trsvcid": "4420" 00:16:24.789 }, 00:16:24.789 "peer_address": { 00:16:24.789 "trtype": "RDMA", 00:16:24.789 "adrfam": "IPv4", 00:16:24.789 "traddr": "192.168.100.8", 00:16:24.789 "trsvcid": "40494" 00:16:24.789 }, 00:16:24.789 "auth": { 00:16:24.789 "state": "completed", 00:16:24.789 "digest": "sha384", 00:16:24.789 "dhgroup": "ffdhe2048" 00:16:24.789 } 00:16:24.789 } 00:16:24.789 ]' 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.789 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.048 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.048 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.048 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.048 13:46:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:16:25.985 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.985 13:46:52 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:25.985 13:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.985 13:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.985 13:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.985 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.985 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.985 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.245 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.245 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.505 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.505 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.505 13:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.505 13:46:52 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.505 13:46:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.505 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.505 { 00:16:26.505 "cntlid": 61, 00:16:26.505 "qid": 0, 00:16:26.505 "state": "enabled", 00:16:26.505 "thread": "nvmf_tgt_poll_group_000", 00:16:26.505 "listen_address": { 00:16:26.505 "trtype": "RDMA", 00:16:26.505 "adrfam": "IPv4", 00:16:26.505 "traddr": "192.168.100.8", 00:16:26.505 "trsvcid": "4420" 00:16:26.505 }, 00:16:26.505 "peer_address": { 00:16:26.505 "trtype": "RDMA", 00:16:26.505 "adrfam": "IPv4", 00:16:26.505 "traddr": "192.168.100.8", 00:16:26.505 "trsvcid": "46169" 00:16:26.505 }, 00:16:26.505 "auth": { 00:16:26.505 "state": "completed", 00:16:26.505 "digest": "sha384", 00:16:26.505 "dhgroup": "ffdhe2048" 00:16:26.505 } 00:16:26.505 } 00:16:26.505 ]' 00:16:26.505 13:46:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.505 13:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.505 13:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.764 13:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.764 13:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.764 13:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.764 13:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.764 13:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.022 13:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:16:27.591 13:46:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.591 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:27.591 13:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.591 13:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.591 13:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.591 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.591 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.591 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.850 13:46:54 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.850 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.109 00:16:28.109 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.109 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.109 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.368 { 00:16:28.368 "cntlid": 63, 00:16:28.368 "qid": 0, 00:16:28.368 "state": "enabled", 00:16:28.368 "thread": "nvmf_tgt_poll_group_000", 00:16:28.368 "listen_address": { 00:16:28.368 "trtype": "RDMA", 00:16:28.368 "adrfam": "IPv4", 00:16:28.368 "traddr": "192.168.100.8", 00:16:28.368 "trsvcid": "4420" 00:16:28.368 }, 00:16:28.368 "peer_address": { 00:16:28.368 "trtype": "RDMA", 00:16:28.368 "adrfam": "IPv4", 00:16:28.368 "traddr": "192.168.100.8", 00:16:28.368 "trsvcid": "44829" 00:16:28.368 }, 00:16:28.368 "auth": { 00:16:28.368 "state": "completed", 00:16:28.368 "digest": "sha384", 00:16:28.368 "dhgroup": "ffdhe2048" 00:16:28.368 } 00:16:28.368 } 00:16:28.368 ]' 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
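[editor's note] The remainder of each iteration, visible in compressed form in the surrounding trace, verifies the negotiated authentication parameters on the target and then re-runs the handshake through nvme-cli before tearing everything down. The sketch below is a rough reconstruction under the same assumptions as the previous note; $RPC, $digest and $dhgroup are placeholders, and the DHHC-1 secrets are abbreviated rather than copied from the log.

    # Sketch of the verification/teardown half of one iteration (reconstructed).
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # The qpair's auth block must report the digest/dhgroup picked for this
    # iteration and a completed authentication state.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
    # Drop the SPDK-host controller, then repeat the handshake with the kernel
    # initiator via nvme-cli using the corresponding DHHC-1 secrets.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 \
        --hostid 809f3706-e051-e711-906e-0017a4403562 \
        --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."   # secrets abbreviated
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Remove the host from the subsystem so the next digest/dhgroup/key
    # combination starts from a clean allow list.
    $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562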
00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.368 13:46:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.626 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:16:29.194 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.452 13:46:55 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.712 13:46:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.712 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.712 13:46:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.712 00:16:29.971 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.971 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.971 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.971 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.971 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.971 13:46:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.971 13:46:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.971 13:46:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.971 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.971 { 00:16:29.971 "cntlid": 65, 00:16:29.971 "qid": 0, 00:16:29.971 "state": "enabled", 00:16:29.971 "thread": "nvmf_tgt_poll_group_000", 00:16:29.971 "listen_address": { 00:16:29.971 "trtype": "RDMA", 00:16:29.971 "adrfam": "IPv4", 00:16:29.971 "traddr": "192.168.100.8", 00:16:29.971 "trsvcid": "4420" 00:16:29.971 }, 00:16:29.971 "peer_address": { 00:16:29.971 "trtype": "RDMA", 00:16:29.971 "adrfam": "IPv4", 00:16:29.971 "traddr": "192.168.100.8", 00:16:29.971 "trsvcid": "35066" 00:16:29.971 }, 00:16:29.971 "auth": { 00:16:29.971 "state": "completed", 00:16:29.971 "digest": "sha384", 00:16:29.971 "dhgroup": "ffdhe3072" 00:16:29.971 } 00:16:29.971 } 00:16:29.971 ]' 00:16:29.971 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.229 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.229 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.229 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.229 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.229 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.229 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.229 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.486 13:46:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:16:31.052 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.052 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:31.052 13:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.052 13:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.052 13:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.052 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.052 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:31.052 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.311 13:46:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.570 00:16:31.570 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:16:31.570 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.570 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.828 { 00:16:31.828 "cntlid": 67, 00:16:31.828 "qid": 0, 00:16:31.828 "state": "enabled", 00:16:31.828 "thread": "nvmf_tgt_poll_group_000", 00:16:31.828 "listen_address": { 00:16:31.828 "trtype": "RDMA", 00:16:31.828 "adrfam": "IPv4", 00:16:31.828 "traddr": "192.168.100.8", 00:16:31.828 "trsvcid": "4420" 00:16:31.828 }, 00:16:31.828 "peer_address": { 00:16:31.828 "trtype": "RDMA", 00:16:31.828 "adrfam": "IPv4", 00:16:31.828 "traddr": "192.168.100.8", 00:16:31.828 "trsvcid": "55984" 00:16:31.828 }, 00:16:31.828 "auth": { 00:16:31.828 "state": "completed", 00:16:31.828 "digest": "sha384", 00:16:31.828 "dhgroup": "ffdhe3072" 00:16:31.828 } 00:16:31.828 } 00:16:31.828 ]' 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.828 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.086 13:46:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:16:32.653 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.911 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:32.911 13:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.911 13:46:59 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.911 13:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.911 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.911 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.911 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.170 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.171 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.429 00:16:33.429 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.429 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.429 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.429 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.429 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.429 13:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.429 13:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.429 13:46:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.429 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.429 { 00:16:33.429 "cntlid": 69, 00:16:33.429 "qid": 
0, 00:16:33.429 "state": "enabled", 00:16:33.429 "thread": "nvmf_tgt_poll_group_000", 00:16:33.429 "listen_address": { 00:16:33.429 "trtype": "RDMA", 00:16:33.429 "adrfam": "IPv4", 00:16:33.429 "traddr": "192.168.100.8", 00:16:33.429 "trsvcid": "4420" 00:16:33.429 }, 00:16:33.429 "peer_address": { 00:16:33.429 "trtype": "RDMA", 00:16:33.429 "adrfam": "IPv4", 00:16:33.429 "traddr": "192.168.100.8", 00:16:33.429 "trsvcid": "34357" 00:16:33.429 }, 00:16:33.429 "auth": { 00:16:33.429 "state": "completed", 00:16:33.429 "digest": "sha384", 00:16:33.429 "dhgroup": "ffdhe3072" 00:16:33.429 } 00:16:33.429 } 00:16:33.429 ]' 00:16:33.429 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.688 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.688 13:46:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.688 13:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.688 13:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.688 13:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.688 13:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.688 13:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.947 13:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:16:34.514 13:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.514 13:47:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:34.514 13:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.514 13:47:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.514 13:47:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.514 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.514 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.514 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:34.773 13:47:01 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.773 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.041 00:16:35.041 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.041 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.041 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.386 { 00:16:35.386 "cntlid": 71, 00:16:35.386 "qid": 0, 00:16:35.386 "state": "enabled", 00:16:35.386 "thread": "nvmf_tgt_poll_group_000", 00:16:35.386 "listen_address": { 00:16:35.386 "trtype": "RDMA", 00:16:35.386 "adrfam": "IPv4", 00:16:35.386 "traddr": "192.168.100.8", 00:16:35.386 "trsvcid": "4420" 00:16:35.386 }, 00:16:35.386 "peer_address": { 00:16:35.386 "trtype": "RDMA", 00:16:35.386 "adrfam": "IPv4", 00:16:35.386 "traddr": "192.168.100.8", 00:16:35.386 "trsvcid": "50521" 00:16:35.386 }, 00:16:35.386 "auth": { 00:16:35.386 "state": "completed", 00:16:35.386 "digest": "sha384", 00:16:35.386 "dhgroup": "ffdhe3072" 00:16:35.386 } 00:16:35.386 } 00:16:35.386 ]' 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
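A condensed sketch of the qpair check this trace keeps repeating: nvmf_subsystem_get_qpairs is queried and the negotiated digest, DH group and auth state are compared against the values under test. The rpc.py path and NQN are copied from this run; verify_qpair_auth is an illustrative name only, and the socket handling is simplified (the suite routes these calls through its own rpc_cmd/hostrpc wrappers):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  verify_qpair_auth() {
      local digest=$1 dhgroup=$2 qpairs
      # ask the target which qpairs the subsystem currently has
      qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
      # authentication only counts as successful when all three fields match
      [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest" ]] &&
      [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]] &&
      [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == "completed" ]]
  }
  # e.g. verify_qpair_auth sha384 ffdhe3072
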
00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.386 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.662 13:47:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:16:36.229 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.229 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:36.229 13:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.229 13:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 13:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.229 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.229 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.229 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.229 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.488 13:47:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.747 00:16:36.747 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.747 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.747 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.005 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.006 { 00:16:37.006 "cntlid": 73, 00:16:37.006 "qid": 0, 00:16:37.006 "state": "enabled", 00:16:37.006 "thread": "nvmf_tgt_poll_group_000", 00:16:37.006 "listen_address": { 00:16:37.006 "trtype": "RDMA", 00:16:37.006 "adrfam": "IPv4", 00:16:37.006 "traddr": "192.168.100.8", 00:16:37.006 "trsvcid": "4420" 00:16:37.006 }, 00:16:37.006 "peer_address": { 00:16:37.006 "trtype": "RDMA", 00:16:37.006 "adrfam": "IPv4", 00:16:37.006 "traddr": "192.168.100.8", 00:16:37.006 "trsvcid": "56238" 00:16:37.006 }, 00:16:37.006 "auth": { 00:16:37.006 "state": "completed", 00:16:37.006 "digest": "sha384", 00:16:37.006 "dhgroup": "ffdhe4096" 00:16:37.006 } 00:16:37.006 } 00:16:37.006 ]' 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.006 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.266 13:47:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:16:37.834 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.093 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:38.093 13:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.093 13:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.093 13:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.093 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.093 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.093 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.351 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.352 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.609 00:16:38.609 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.609 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.609 13:47:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.868 13:47:05 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.868 { 00:16:38.868 "cntlid": 75, 00:16:38.868 "qid": 0, 00:16:38.868 "state": "enabled", 00:16:38.868 "thread": "nvmf_tgt_poll_group_000", 00:16:38.868 "listen_address": { 00:16:38.868 "trtype": "RDMA", 00:16:38.868 "adrfam": "IPv4", 00:16:38.868 "traddr": "192.168.100.8", 00:16:38.868 "trsvcid": "4420" 00:16:38.868 }, 00:16:38.868 "peer_address": { 00:16:38.868 "trtype": "RDMA", 00:16:38.868 "adrfam": "IPv4", 00:16:38.868 "traddr": "192.168.100.8", 00:16:38.868 "trsvcid": "51435" 00:16:38.868 }, 00:16:38.868 "auth": { 00:16:38.868 "state": "completed", 00:16:38.868 "digest": "sha384", 00:16:38.868 "dhgroup": "ffdhe4096" 00:16:38.868 } 00:16:38.868 } 00:16:38.868 ]' 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.868 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.127 13:47:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:16:39.694 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.694 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:39.694 13:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.694 13:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.694 13:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.694 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.694 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.694 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.953 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.212 00:16:40.212 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.212 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.212 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.471 { 00:16:40.471 "cntlid": 77, 00:16:40.471 "qid": 0, 00:16:40.471 "state": "enabled", 00:16:40.471 "thread": "nvmf_tgt_poll_group_000", 00:16:40.471 "listen_address": { 00:16:40.471 "trtype": "RDMA", 00:16:40.471 "adrfam": "IPv4", 00:16:40.471 "traddr": "192.168.100.8", 00:16:40.471 "trsvcid": "4420" 00:16:40.471 }, 00:16:40.471 "peer_address": { 00:16:40.471 "trtype": 
"RDMA", 00:16:40.471 "adrfam": "IPv4", 00:16:40.471 "traddr": "192.168.100.8", 00:16:40.471 "trsvcid": "48787" 00:16:40.471 }, 00:16:40.471 "auth": { 00:16:40.471 "state": "completed", 00:16:40.471 "digest": "sha384", 00:16:40.471 "dhgroup": "ffdhe4096" 00:16:40.471 } 00:16:40.471 } 00:16:40.471 ]' 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.471 13:47:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.729 13:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.729 13:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.729 13:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.729 13:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:16:41.664 13:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.664 13:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:41.664 13:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.664 13:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.664 13:47:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.664 13:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.664 13:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.664 13:47:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.664 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.921 00:16:41.921 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.921 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.921 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.179 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.179 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.179 13:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.179 13:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.179 13:47:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.179 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.179 { 00:16:42.179 "cntlid": 79, 00:16:42.179 "qid": 0, 00:16:42.179 "state": "enabled", 00:16:42.179 "thread": "nvmf_tgt_poll_group_000", 00:16:42.179 "listen_address": { 00:16:42.179 "trtype": "RDMA", 00:16:42.179 "adrfam": "IPv4", 00:16:42.179 "traddr": "192.168.100.8", 00:16:42.179 "trsvcid": "4420" 00:16:42.179 }, 00:16:42.179 "peer_address": { 00:16:42.179 "trtype": "RDMA", 00:16:42.179 "adrfam": "IPv4", 00:16:42.179 "traddr": "192.168.100.8", 00:16:42.179 "trsvcid": "54825" 00:16:42.179 }, 00:16:42.179 "auth": { 00:16:42.179 "state": "completed", 00:16:42.179 "digest": "sha384", 00:16:42.179 "dhgroup": "ffdhe4096" 00:16:42.179 } 00:16:42.179 } 00:16:42.179 ]' 00:16:42.179 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.179 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.179 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.438 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.438 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.438 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.438 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
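Each pass recorded here has the same shape: the host-side bdev_nvme layer is restricted to one digest/DH group, the subsystem admits the host with the matching DH-HMAC-CHAP key pair, a controller is attached over RDMA (which runs the authentication exchange), the qpair is verified, and everything is torn down before the next key. A condensed sketch of that cycle, reusing the addresses, NQNs and rpc.py path from this run; the loop structure is illustrative, the DHHC-1 secrets are elided, and the nvme-cli connect/disconnect round-trip the suite also performs between detach and remove_host is omitted:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side RPC server, as in the log
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for key in key0 key1 key2 key3; do
          # host: only negotiate the digest/dhgroup under test
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          # target: allow the host with this key (the log omits the ctrlr key for key3)
          "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" --dhchap-ctrlr-key "c$key"
          # host: attach, which triggers the DH-HMAC-CHAP exchange on the new qpair
          hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
              -q "$hostnqn" -n "$subnqn" --dhchap-key "$key" --dhchap-ctrlr-key "c$key"
          # ... check .auth.digest/.dhgroup/.state as sketched above, then tear down
          hostrpc bdev_nvme_detach_controller nvme0
          "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
      done
  done
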
00:16:42.438 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.696 13:47:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:16:43.263 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.263 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:43.263 13:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.263 13:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.263 13:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.263 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.263 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.263 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.263 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.521 13:47:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.779 00:16:43.779 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.779 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.779 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.037 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.037 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.037 13:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.037 13:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.037 13:47:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.037 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.037 { 00:16:44.037 "cntlid": 81, 00:16:44.037 "qid": 0, 00:16:44.037 "state": "enabled", 00:16:44.037 "thread": "nvmf_tgt_poll_group_000", 00:16:44.037 "listen_address": { 00:16:44.037 "trtype": "RDMA", 00:16:44.037 "adrfam": "IPv4", 00:16:44.037 "traddr": "192.168.100.8", 00:16:44.037 "trsvcid": "4420" 00:16:44.037 }, 00:16:44.037 "peer_address": { 00:16:44.037 "trtype": "RDMA", 00:16:44.037 "adrfam": "IPv4", 00:16:44.037 "traddr": "192.168.100.8", 00:16:44.037 "trsvcid": "53466" 00:16:44.037 }, 00:16:44.037 "auth": { 00:16:44.037 "state": "completed", 00:16:44.037 "digest": "sha384", 00:16:44.037 "dhgroup": "ffdhe6144" 00:16:44.037 } 00:16:44.037 } 00:16:44.038 ]' 00:16:44.038 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.038 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.038 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.038 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.038 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.296 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.296 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.296 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.296 13:47:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.229 13:47:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.795 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.795 13:47:12 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.795 { 00:16:45.795 "cntlid": 83, 00:16:45.795 "qid": 0, 00:16:45.795 "state": "enabled", 00:16:45.795 "thread": "nvmf_tgt_poll_group_000", 00:16:45.795 "listen_address": { 00:16:45.795 "trtype": "RDMA", 00:16:45.795 "adrfam": "IPv4", 00:16:45.795 "traddr": "192.168.100.8", 00:16:45.795 "trsvcid": "4420" 00:16:45.795 }, 00:16:45.795 "peer_address": { 00:16:45.795 "trtype": "RDMA", 00:16:45.795 "adrfam": "IPv4", 00:16:45.795 "traddr": "192.168.100.8", 00:16:45.795 "trsvcid": "54996" 00:16:45.795 }, 00:16:45.795 "auth": { 00:16:45.795 "state": "completed", 00:16:45.795 "digest": "sha384", 00:16:45.795 "dhgroup": "ffdhe6144" 00:16:45.795 } 00:16:45.795 } 00:16:45.795 ]' 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.795 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.054 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.054 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.054 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.054 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.054 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.312 13:47:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:16:46.889 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.889 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:46.889 13:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.889 13:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.889 13:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.889 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.889 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.889 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:47.148 13:47:13 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.148 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.406 00:16:47.406 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.406 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.406 13:47:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.664 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.664 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.664 13:47:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.664 13:47:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.664 13:47:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.664 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.664 { 00:16:47.664 "cntlid": 85, 00:16:47.664 "qid": 0, 00:16:47.664 "state": "enabled", 00:16:47.664 "thread": "nvmf_tgt_poll_group_000", 00:16:47.664 "listen_address": { 00:16:47.664 "trtype": "RDMA", 00:16:47.664 "adrfam": "IPv4", 00:16:47.664 "traddr": "192.168.100.8", 00:16:47.664 "trsvcid": "4420" 00:16:47.664 }, 00:16:47.664 "peer_address": { 00:16:47.664 "trtype": "RDMA", 00:16:47.664 "adrfam": "IPv4", 00:16:47.664 "traddr": "192.168.100.8", 00:16:47.664 "trsvcid": "37895" 00:16:47.664 }, 00:16:47.664 "auth": { 00:16:47.664 "state": "completed", 00:16:47.664 "digest": "sha384", 00:16:47.664 "dhgroup": "ffdhe6144" 00:16:47.664 } 00:16:47.664 } 00:16:47.664 ]' 00:16:47.664 13:47:14 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.664 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.664 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.664 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.664 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.923 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.923 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.923 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.923 13:47:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:16:48.491 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.755 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:48.755 13:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.755 13:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.755 13:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.755 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.755 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.755 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.014 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.272 00:16:49.272 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.272 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.272 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.531 { 00:16:49.531 "cntlid": 87, 00:16:49.531 "qid": 0, 00:16:49.531 "state": "enabled", 00:16:49.531 "thread": "nvmf_tgt_poll_group_000", 00:16:49.531 "listen_address": { 00:16:49.531 "trtype": "RDMA", 00:16:49.531 "adrfam": "IPv4", 00:16:49.531 "traddr": "192.168.100.8", 00:16:49.531 "trsvcid": "4420" 00:16:49.531 }, 00:16:49.531 "peer_address": { 00:16:49.531 "trtype": "RDMA", 00:16:49.531 "adrfam": "IPv4", 00:16:49.531 "traddr": "192.168.100.8", 00:16:49.531 "trsvcid": "52770" 00:16:49.531 }, 00:16:49.531 "auth": { 00:16:49.531 "state": "completed", 00:16:49.531 "digest": "sha384", 00:16:49.531 "dhgroup": "ffdhe6144" 00:16:49.531 } 00:16:49.531 } 00:16:49.531 ]' 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.531 13:47:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.790 13:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:16:50.358 13:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.617 13:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:50.617 13:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.617 13:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.617 13:47:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.617 13:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.617 13:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.617 13:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.617 13:47:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.617 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.185 00:16:51.185 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.185 13:47:17 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.185 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.443 { 00:16:51.443 "cntlid": 89, 00:16:51.443 "qid": 0, 00:16:51.443 "state": "enabled", 00:16:51.443 "thread": "nvmf_tgt_poll_group_000", 00:16:51.443 "listen_address": { 00:16:51.443 "trtype": "RDMA", 00:16:51.443 "adrfam": "IPv4", 00:16:51.443 "traddr": "192.168.100.8", 00:16:51.443 "trsvcid": "4420" 00:16:51.443 }, 00:16:51.443 "peer_address": { 00:16:51.443 "trtype": "RDMA", 00:16:51.443 "adrfam": "IPv4", 00:16:51.443 "traddr": "192.168.100.8", 00:16:51.443 "trsvcid": "52715" 00:16:51.443 }, 00:16:51.443 "auth": { 00:16:51.443 "state": "completed", 00:16:51.443 "digest": "sha384", 00:16:51.443 "dhgroup": "ffdhe8192" 00:16:51.443 } 00:16:51.443 } 00:16:51.443 ]' 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.443 13:47:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.703 13:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:16:52.270 13:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.529 13:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:52.529 13:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.529 13:47:18 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.529 13:47:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.529 13:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.529 13:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.529 13:47:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.787 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:52.787 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.787 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.787 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.787 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:52.787 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.787 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.787 13:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.787 13:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.787 13:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.788 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.788 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.045 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.304 { 00:16:53.304 "cntlid": 91, 00:16:53.304 "qid": 
0, 00:16:53.304 "state": "enabled", 00:16:53.304 "thread": "nvmf_tgt_poll_group_000", 00:16:53.304 "listen_address": { 00:16:53.304 "trtype": "RDMA", 00:16:53.304 "adrfam": "IPv4", 00:16:53.304 "traddr": "192.168.100.8", 00:16:53.304 "trsvcid": "4420" 00:16:53.304 }, 00:16:53.304 "peer_address": { 00:16:53.304 "trtype": "RDMA", 00:16:53.304 "adrfam": "IPv4", 00:16:53.304 "traddr": "192.168.100.8", 00:16:53.304 "trsvcid": "58282" 00:16:53.304 }, 00:16:53.304 "auth": { 00:16:53.304 "state": "completed", 00:16:53.304 "digest": "sha384", 00:16:53.304 "dhgroup": "ffdhe8192" 00:16:53.304 } 00:16:53.304 } 00:16:53.304 ]' 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.304 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.562 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.562 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.562 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.562 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.562 13:47:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.821 13:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:16:54.388 13:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.388 13:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:54.388 13:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.388 13:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.388 13:47:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.388 13:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.388 13:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:54.388 13:47:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.647 13:47:21 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.647 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.213 00:16:55.213 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.213 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.213 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.213 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.213 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.472 { 00:16:55.472 "cntlid": 93, 00:16:55.472 "qid": 0, 00:16:55.472 "state": "enabled", 00:16:55.472 "thread": "nvmf_tgt_poll_group_000", 00:16:55.472 "listen_address": { 00:16:55.472 "trtype": "RDMA", 00:16:55.472 "adrfam": "IPv4", 00:16:55.472 "traddr": "192.168.100.8", 00:16:55.472 "trsvcid": "4420" 00:16:55.472 }, 00:16:55.472 "peer_address": { 00:16:55.472 "trtype": "RDMA", 00:16:55.472 "adrfam": "IPv4", 00:16:55.472 "traddr": "192.168.100.8", 00:16:55.472 "trsvcid": "33601" 00:16:55.472 }, 00:16:55.472 "auth": { 00:16:55.472 "state": "completed", 00:16:55.472 "digest": "sha384", 00:16:55.472 "dhgroup": "ffdhe8192" 00:16:55.472 } 00:16:55.472 } 00:16:55.472 ]' 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.472 13:47:21 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.472 13:47:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.730 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:16:56.299 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.299 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:56.299 13:47:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.299 13:47:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.299 13:47:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.299 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.299 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:56.299 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:56.631 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:56.631 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.631 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.631 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:56.631 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:56.631 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.631 13:47:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:16:56.631 13:47:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.631 13:47:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.631 13:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.631 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:56.631 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.216 00:16:57.216 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.216 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.216 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.216 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.216 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.216 13:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.216 13:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.216 13:47:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.216 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.216 { 00:16:57.216 "cntlid": 95, 00:16:57.216 "qid": 0, 00:16:57.216 "state": "enabled", 00:16:57.216 "thread": "nvmf_tgt_poll_group_000", 00:16:57.216 "listen_address": { 00:16:57.216 "trtype": "RDMA", 00:16:57.216 "adrfam": "IPv4", 00:16:57.216 "traddr": "192.168.100.8", 00:16:57.216 "trsvcid": "4420" 00:16:57.216 }, 00:16:57.216 "peer_address": { 00:16:57.216 "trtype": "RDMA", 00:16:57.216 "adrfam": "IPv4", 00:16:57.216 "traddr": "192.168.100.8", 00:16:57.216 "trsvcid": "52296" 00:16:57.216 }, 00:16:57.216 "auth": { 00:16:57.216 "state": "completed", 00:16:57.216 "digest": "sha384", 00:16:57.216 "dhgroup": "ffdhe8192" 00:16:57.216 } 00:16:57.216 } 00:16:57.216 ]' 00:16:57.216 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.474 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.474 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.474 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.474 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.474 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.474 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.474 13:47:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.733 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:16:58.299 13:47:24 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.300 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:16:58.300 13:47:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.300 13:47:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.300 13:47:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.300 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:58.300 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.300 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.300 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.300 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.558 13:47:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.817 00:16:58.817 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.817 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.817 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq 
-r '.[].name' 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.075 { 00:16:59.075 "cntlid": 97, 00:16:59.075 "qid": 0, 00:16:59.075 "state": "enabled", 00:16:59.075 "thread": "nvmf_tgt_poll_group_000", 00:16:59.075 "listen_address": { 00:16:59.075 "trtype": "RDMA", 00:16:59.075 "adrfam": "IPv4", 00:16:59.075 "traddr": "192.168.100.8", 00:16:59.075 "trsvcid": "4420" 00:16:59.075 }, 00:16:59.075 "peer_address": { 00:16:59.075 "trtype": "RDMA", 00:16:59.075 "adrfam": "IPv4", 00:16:59.075 "traddr": "192.168.100.8", 00:16:59.075 "trsvcid": "49321" 00:16:59.075 }, 00:16:59.075 "auth": { 00:16:59.075 "state": "completed", 00:16:59.075 "digest": "sha512", 00:16:59.075 "dhgroup": "null" 00:16:59.075 } 00:16:59.075 } 00:16:59.075 ]' 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.075 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.333 13:47:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:16:59.901 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.160 13:47:26 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.160 13:47:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.161 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.161 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.420 00:17:00.420 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.420 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.420 13:47:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.678 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.678 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.678 13:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.678 13:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.678 13:47:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.678 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.678 { 00:17:00.678 "cntlid": 99, 00:17:00.678 "qid": 0, 00:17:00.678 "state": "enabled", 00:17:00.678 "thread": "nvmf_tgt_poll_group_000", 00:17:00.678 "listen_address": { 00:17:00.678 "trtype": "RDMA", 00:17:00.679 "adrfam": "IPv4", 00:17:00.679 "traddr": "192.168.100.8", 00:17:00.679 "trsvcid": "4420" 00:17:00.679 }, 00:17:00.679 
"peer_address": { 00:17:00.679 "trtype": "RDMA", 00:17:00.679 "adrfam": "IPv4", 00:17:00.679 "traddr": "192.168.100.8", 00:17:00.679 "trsvcid": "51282" 00:17:00.679 }, 00:17:00.679 "auth": { 00:17:00.679 "state": "completed", 00:17:00.679 "digest": "sha512", 00:17:00.679 "dhgroup": "null" 00:17:00.679 } 00:17:00.679 } 00:17:00.679 ]' 00:17:00.679 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.679 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.679 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.938 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:00.938 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.938 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.938 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.938 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.197 13:47:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:17:01.765 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.765 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:01.765 13:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.765 13:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.765 13:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.765 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.765 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.765 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.024 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.282 00:17:02.282 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.282 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.282 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.541 { 00:17:02.541 "cntlid": 101, 00:17:02.541 "qid": 0, 00:17:02.541 "state": "enabled", 00:17:02.541 "thread": "nvmf_tgt_poll_group_000", 00:17:02.541 "listen_address": { 00:17:02.541 "trtype": "RDMA", 00:17:02.541 "adrfam": "IPv4", 00:17:02.541 "traddr": "192.168.100.8", 00:17:02.541 "trsvcid": "4420" 00:17:02.541 }, 00:17:02.541 "peer_address": { 00:17:02.541 "trtype": "RDMA", 00:17:02.541 "adrfam": "IPv4", 00:17:02.541 "traddr": "192.168.100.8", 00:17:02.541 "trsvcid": "33422" 00:17:02.541 }, 00:17:02.541 "auth": { 00:17:02.541 "state": "completed", 00:17:02.541 "digest": "sha512", 00:17:02.541 "dhgroup": "null" 00:17:02.541 } 00:17:02.541 } 00:17:02.541 ]' 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:02.541 13:47:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.541 13:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.541 13:47:29 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.541 13:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.799 13:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:17:03.365 13:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.624 13:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:03.624 13:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.624 13:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.624 13:47:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.624 13:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.624 13:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:03.624 13:47:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.882 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.141 00:17:04.141 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.141 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.141 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.141 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.141 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.141 13:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.141 13:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.141 13:47:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.141 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.141 { 00:17:04.141 "cntlid": 103, 00:17:04.141 "qid": 0, 00:17:04.141 "state": "enabled", 00:17:04.141 "thread": "nvmf_tgt_poll_group_000", 00:17:04.141 "listen_address": { 00:17:04.141 "trtype": "RDMA", 00:17:04.141 "adrfam": "IPv4", 00:17:04.141 "traddr": "192.168.100.8", 00:17:04.141 "trsvcid": "4420" 00:17:04.141 }, 00:17:04.141 "peer_address": { 00:17:04.141 "trtype": "RDMA", 00:17:04.141 "adrfam": "IPv4", 00:17:04.141 "traddr": "192.168.100.8", 00:17:04.141 "trsvcid": "54427" 00:17:04.141 }, 00:17:04.141 "auth": { 00:17:04.141 "state": "completed", 00:17:04.141 "digest": "sha512", 00:17:04.141 "dhgroup": "null" 00:17:04.141 } 00:17:04.141 } 00:17:04.141 ]' 00:17:04.141 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.400 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.400 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.400 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:04.400 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.400 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.400 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.400 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.659 13:47:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:17:05.226 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.226 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:05.226 13:47:31 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.227 13:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.227 13:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.227 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.227 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.227 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.227 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.486 13:47:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.757 00:17:05.757 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.757 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.757 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.021 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.021 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.021 13:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.021 13:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
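The trace above repeats one connect_authenticate iteration for every digest/dhgroup/key combination it walks through (in this stretch of the log, sha384 and sha512 with ffdhe6144, ffdhe8192, null and ffdhe2048, keys key0 through key3). Below is a condensed sketch of a single iteration, reconstructed only from the commands visible in this log rather than taken from the test script itself; the rpc/hostrpc variables and the hostnqn assignment are illustrative stand-ins for the suite's rpc_cmd and hostrpc helpers, the DHHC-1 secrets are elided, and the sketch assumes the target subsystem, its RDMA listener on 192.168.100.8:4420 and the key objects key0..key3 / ckey0..ckey3 were created earlier in the run.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # target-side RPC (default socket); stand-in for rpc_cmd
hostrpc="$rpc -s /var/tmp/host.sock"                               # host-side bdev_nvme RPC, as used by hostrpc in the log
hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

# Pin the host initiator to one digest/dhgroup pair so the negotiated values are deterministic.
$hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Allow the host NQN on the target subsystem, bound to a DH-HMAC-CHAP key pair.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach a controller from the host with the same keys; this drives the authentication
# handshake over the RDMA transport.
$hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
[[ $($hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Confirm on the target that the first reported qpair authenticated with the expected parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down the bdev_nvme controller, then repeat the connection with nvme-cli, passing the
# raw DHHC-1 secrets (elided here; the full strings appear in the log) instead of key names.
$hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 809f3706-e051-e711-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Drop the host entry before the next digest/dhgroup/key combination is configured.
$rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

Because bdev_nvme_set_options restricts the initiator to a single digest and dhgroup per pass, the jq checks against .[0].auth can assert exact values rather than membership in a set, which is exactly the pattern the [[ ... == ... ]] comparisons in the trace follow.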
00:17:06.021 13:47:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.021 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.021 { 00:17:06.021 "cntlid": 105, 00:17:06.021 "qid": 0, 00:17:06.021 "state": "enabled", 00:17:06.021 "thread": "nvmf_tgt_poll_group_000", 00:17:06.021 "listen_address": { 00:17:06.021 "trtype": "RDMA", 00:17:06.022 "adrfam": "IPv4", 00:17:06.022 "traddr": "192.168.100.8", 00:17:06.022 "trsvcid": "4420" 00:17:06.022 }, 00:17:06.022 "peer_address": { 00:17:06.022 "trtype": "RDMA", 00:17:06.022 "adrfam": "IPv4", 00:17:06.022 "traddr": "192.168.100.8", 00:17:06.022 "trsvcid": "54528" 00:17:06.022 }, 00:17:06.022 "auth": { 00:17:06.022 "state": "completed", 00:17:06.022 "digest": "sha512", 00:17:06.022 "dhgroup": "ffdhe2048" 00:17:06.022 } 00:17:06.022 } 00:17:06.022 ]' 00:17:06.022 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.022 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.022 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.022 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.022 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.022 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.022 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.022 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.280 13:47:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:17:06.849 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.117 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:07.117 13:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.117 13:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.117 13:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.117 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.117 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.117 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.378 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.378 13:47:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.636 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.636 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.636 13:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.636 13:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.636 13:47:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.636 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.636 { 00:17:07.636 "cntlid": 107, 00:17:07.636 "qid": 0, 00:17:07.636 "state": "enabled", 00:17:07.636 "thread": "nvmf_tgt_poll_group_000", 00:17:07.636 "listen_address": { 00:17:07.636 "trtype": "RDMA", 00:17:07.636 "adrfam": "IPv4", 00:17:07.636 "traddr": "192.168.100.8", 00:17:07.636 "trsvcid": "4420" 00:17:07.636 }, 00:17:07.636 "peer_address": { 00:17:07.636 "trtype": "RDMA", 00:17:07.636 "adrfam": "IPv4", 00:17:07.636 "traddr": "192.168.100.8", 00:17:07.636 "trsvcid": "47915" 00:17:07.636 }, 00:17:07.636 "auth": { 00:17:07.636 "state": "completed", 00:17:07.636 "digest": "sha512", 00:17:07.636 "dhgroup": "ffdhe2048" 00:17:07.636 } 00:17:07.636 } 00:17:07.636 ]' 00:17:07.636 13:47:34 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.636 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.636 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.917 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.917 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.917 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.917 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.917 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.917 13:47:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.852 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.112 00:17:09.112 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.112 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.112 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.371 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.371 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.371 13:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.371 13:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.371 13:47:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.371 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.371 { 00:17:09.371 "cntlid": 109, 00:17:09.371 "qid": 0, 00:17:09.371 "state": "enabled", 00:17:09.371 "thread": "nvmf_tgt_poll_group_000", 00:17:09.371 "listen_address": { 00:17:09.371 "trtype": "RDMA", 00:17:09.371 "adrfam": "IPv4", 00:17:09.371 "traddr": "192.168.100.8", 00:17:09.371 "trsvcid": "4420" 00:17:09.371 }, 00:17:09.371 "peer_address": { 00:17:09.371 "trtype": "RDMA", 00:17:09.371 "adrfam": "IPv4", 00:17:09.371 "traddr": "192.168.100.8", 00:17:09.371 "trsvcid": "60682" 00:17:09.371 }, 00:17:09.371 "auth": { 00:17:09.371 "state": "completed", 00:17:09.371 "digest": "sha512", 00:17:09.371 "dhgroup": "ffdhe2048" 00:17:09.371 } 00:17:09.371 } 00:17:09.371 ]' 00:17:09.371 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.371 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.371 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.629 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.629 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.629 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.629 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.629 13:47:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.629 13:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:17:10.563 13:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.564 13:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:10.564 13:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.564 13:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.564 13:47:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.564 13:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.564 13:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:10.564 13:47:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:10.564 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:10.564 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.564 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:10.564 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:10.564 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:10.564 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.564 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:17:10.564 13:47:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.564 13:47:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.821 13:47:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.821 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.821 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.821 00:17:11.079 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.079 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.079 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.079 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.079 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.079 13:47:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.079 13:47:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.079 13:47:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.079 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.079 { 00:17:11.079 "cntlid": 111, 00:17:11.079 "qid": 0, 00:17:11.079 "state": "enabled", 00:17:11.079 "thread": "nvmf_tgt_poll_group_000", 00:17:11.079 "listen_address": { 00:17:11.079 "trtype": "RDMA", 00:17:11.079 "adrfam": "IPv4", 00:17:11.079 "traddr": "192.168.100.8", 00:17:11.079 "trsvcid": "4420" 00:17:11.079 }, 00:17:11.079 "peer_address": { 00:17:11.079 "trtype": "RDMA", 00:17:11.079 "adrfam": "IPv4", 00:17:11.079 "traddr": "192.168.100.8", 00:17:11.079 "trsvcid": "35367" 00:17:11.079 }, 00:17:11.079 "auth": { 00:17:11.079 "state": "completed", 00:17:11.079 "digest": "sha512", 00:17:11.079 "dhgroup": "ffdhe2048" 00:17:11.079 } 00:17:11.079 } 00:17:11.079 ]' 00:17:11.079 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.338 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.338 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.338 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.338 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.338 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.338 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.338 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.596 13:47:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:17:12.163 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.163 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:12.163 13:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.163 13:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.163 13:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.163 13:47:38 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.163 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.163 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.163 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.422 13:47:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.681 00:17:12.681 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.681 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.681 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.939 { 00:17:12.939 "cntlid": 113, 00:17:12.939 "qid": 0, 00:17:12.939 "state": "enabled", 00:17:12.939 
"thread": "nvmf_tgt_poll_group_000", 00:17:12.939 "listen_address": { 00:17:12.939 "trtype": "RDMA", 00:17:12.939 "adrfam": "IPv4", 00:17:12.939 "traddr": "192.168.100.8", 00:17:12.939 "trsvcid": "4420" 00:17:12.939 }, 00:17:12.939 "peer_address": { 00:17:12.939 "trtype": "RDMA", 00:17:12.939 "adrfam": "IPv4", 00:17:12.939 "traddr": "192.168.100.8", 00:17:12.939 "trsvcid": "36594" 00:17:12.939 }, 00:17:12.939 "auth": { 00:17:12.939 "state": "completed", 00:17:12.939 "digest": "sha512", 00:17:12.939 "dhgroup": "ffdhe3072" 00:17:12.939 } 00:17:12.939 } 00:17:12.939 ]' 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.939 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.198 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.198 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.198 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.198 13:47:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:17:13.767 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.025 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:14.025 13:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.025 13:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.025 13:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.025 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.025 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.025 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.284 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.543 00:17:14.543 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.543 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.543 13:47:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.543 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.543 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.543 13:47:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.543 13:47:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.802 13:47:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.802 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.802 { 00:17:14.802 "cntlid": 115, 00:17:14.802 "qid": 0, 00:17:14.802 "state": "enabled", 00:17:14.802 "thread": "nvmf_tgt_poll_group_000", 00:17:14.802 "listen_address": { 00:17:14.802 "trtype": "RDMA", 00:17:14.802 "adrfam": "IPv4", 00:17:14.802 "traddr": "192.168.100.8", 00:17:14.802 "trsvcid": "4420" 00:17:14.802 }, 00:17:14.802 "peer_address": { 00:17:14.802 "trtype": "RDMA", 00:17:14.802 "adrfam": "IPv4", 00:17:14.802 "traddr": "192.168.100.8", 00:17:14.802 "trsvcid": "44587" 00:17:14.802 }, 00:17:14.802 "auth": { 00:17:14.802 "state": "completed", 00:17:14.802 "digest": "sha512", 00:17:14.802 "dhgroup": "ffdhe3072" 00:17:14.802 } 00:17:14.802 } 00:17:14.802 ]' 00:17:14.802 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.802 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.802 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.802 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.802 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.802 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.802 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.802 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.060 13:47:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:17:15.628 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.628 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:15.628 13:47:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.628 13:47:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.628 13:47:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.628 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.628 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.628 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:15.886 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:15.886 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.886 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.886 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.886 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:15.886 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.886 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.886 13:47:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.886 13:47:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.886 13:47:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.887 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.887 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.146 00:17:16.146 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.146 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.146 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.405 { 00:17:16.405 "cntlid": 117, 00:17:16.405 "qid": 0, 00:17:16.405 "state": "enabled", 00:17:16.405 "thread": "nvmf_tgt_poll_group_000", 00:17:16.405 "listen_address": { 00:17:16.405 "trtype": "RDMA", 00:17:16.405 "adrfam": "IPv4", 00:17:16.405 "traddr": "192.168.100.8", 00:17:16.405 "trsvcid": "4420" 00:17:16.405 }, 00:17:16.405 "peer_address": { 00:17:16.405 "trtype": "RDMA", 00:17:16.405 "adrfam": "IPv4", 00:17:16.405 "traddr": "192.168.100.8", 00:17:16.405 "trsvcid": "43390" 00:17:16.405 }, 00:17:16.405 "auth": { 00:17:16.405 "state": "completed", 00:17:16.405 "digest": "sha512", 00:17:16.405 "dhgroup": "ffdhe3072" 00:17:16.405 } 00:17:16.405 } 00:17:16.405 ]' 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.405 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.664 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.664 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.664 13:47:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.664 13:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret 
DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:17:17.231 13:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.490 13:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:17.490 13:47:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.490 13:47:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.490 13:47:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.490 13:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.490 13:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.490 13:47:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.770 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.057 00:17:18.057 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.057 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.057 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.057 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.057 13:47:44 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.057 13:47:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.057 13:47:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.057 13:47:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.057 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.057 { 00:17:18.057 "cntlid": 119, 00:17:18.057 "qid": 0, 00:17:18.057 "state": "enabled", 00:17:18.057 "thread": "nvmf_tgt_poll_group_000", 00:17:18.057 "listen_address": { 00:17:18.057 "trtype": "RDMA", 00:17:18.057 "adrfam": "IPv4", 00:17:18.057 "traddr": "192.168.100.8", 00:17:18.057 "trsvcid": "4420" 00:17:18.057 }, 00:17:18.057 "peer_address": { 00:17:18.057 "trtype": "RDMA", 00:17:18.057 "adrfam": "IPv4", 00:17:18.057 "traddr": "192.168.100.8", 00:17:18.057 "trsvcid": "52201" 00:17:18.057 }, 00:17:18.057 "auth": { 00:17:18.057 "state": "completed", 00:17:18.057 "digest": "sha512", 00:17:18.057 "dhgroup": "ffdhe3072" 00:17:18.057 } 00:17:18.057 } 00:17:18.057 ]' 00:17:18.057 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.343 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.343 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.343 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.343 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.343 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.343 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.343 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.343 13:47:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe4096 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.280 13:47:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.539 00:17:19.539 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.539 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.539 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.797 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.797 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.797 13:47:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.797 13:47:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.797 13:47:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.797 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.797 { 00:17:19.797 "cntlid": 121, 00:17:19.797 "qid": 0, 00:17:19.797 "state": "enabled", 00:17:19.797 "thread": "nvmf_tgt_poll_group_000", 00:17:19.797 "listen_address": { 00:17:19.797 "trtype": "RDMA", 00:17:19.797 "adrfam": "IPv4", 00:17:19.797 "traddr": "192.168.100.8", 00:17:19.797 "trsvcid": "4420" 00:17:19.797 }, 00:17:19.797 "peer_address": { 00:17:19.797 "trtype": "RDMA", 00:17:19.797 "adrfam": "IPv4", 00:17:19.797 "traddr": 
"192.168.100.8", 00:17:19.797 "trsvcid": "44810" 00:17:19.797 }, 00:17:19.797 "auth": { 00:17:19.797 "state": "completed", 00:17:19.797 "digest": "sha512", 00:17:19.797 "dhgroup": "ffdhe4096" 00:17:19.797 } 00:17:19.797 } 00:17:19.797 ]' 00:17:19.797 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.797 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.797 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.055 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.055 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.055 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.055 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.055 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.055 13:47:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:17:20.990 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.991 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.249 00:17:21.506 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.506 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.506 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.506 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.506 13:47:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.506 13:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.506 13:47:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.506 13:47:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.506 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.506 { 00:17:21.506 "cntlid": 123, 00:17:21.506 "qid": 0, 00:17:21.506 "state": "enabled", 00:17:21.506 "thread": "nvmf_tgt_poll_group_000", 00:17:21.506 "listen_address": { 00:17:21.506 "trtype": "RDMA", 00:17:21.506 "adrfam": "IPv4", 00:17:21.506 "traddr": "192.168.100.8", 00:17:21.506 "trsvcid": "4420" 00:17:21.506 }, 00:17:21.506 "peer_address": { 00:17:21.506 "trtype": "RDMA", 00:17:21.506 "adrfam": "IPv4", 00:17:21.506 "traddr": "192.168.100.8", 00:17:21.506 "trsvcid": "32814" 00:17:21.506 }, 00:17:21.506 "auth": { 00:17:21.506 "state": "completed", 00:17:21.506 "digest": "sha512", 00:17:21.506 "dhgroup": "ffdhe4096" 00:17:21.506 } 00:17:21.506 } 00:17:21.506 ]' 00:17:21.506 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.764 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.764 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.764 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.764 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.764 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.764 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:17:21.764 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.021 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:17:22.587 13:47:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.588 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:22.588 13:47:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.588 13:47:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.588 13:47:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.588 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.588 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:22.588 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:22.845 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.846 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.104 00:17:23.104 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.104 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.104 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.361 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.361 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.361 13:47:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.361 13:47:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.361 13:47:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.361 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.361 { 00:17:23.361 "cntlid": 125, 00:17:23.361 "qid": 0, 00:17:23.361 "state": "enabled", 00:17:23.361 "thread": "nvmf_tgt_poll_group_000", 00:17:23.361 "listen_address": { 00:17:23.361 "trtype": "RDMA", 00:17:23.361 "adrfam": "IPv4", 00:17:23.361 "traddr": "192.168.100.8", 00:17:23.361 "trsvcid": "4420" 00:17:23.361 }, 00:17:23.361 "peer_address": { 00:17:23.361 "trtype": "RDMA", 00:17:23.361 "adrfam": "IPv4", 00:17:23.361 "traddr": "192.168.100.8", 00:17:23.361 "trsvcid": "57226" 00:17:23.361 }, 00:17:23.362 "auth": { 00:17:23.362 "state": "completed", 00:17:23.362 "digest": "sha512", 00:17:23.362 "dhgroup": "ffdhe4096" 00:17:23.362 } 00:17:23.362 } 00:17:23.362 ]' 00:17:23.362 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.362 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.362 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.362 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.362 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.620 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.620 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.620 13:47:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.620 13:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:17:24.184 13:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.442 13:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:24.442 13:47:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.442 13:47:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.442 13:47:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.442 13:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.442 13:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.442 13:47:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.701 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.959 00:17:24.959 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.959 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.959 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.959 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.959 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.959 13:47:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.959 13:47:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.959 13:47:51 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.216 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.216 { 00:17:25.216 "cntlid": 127, 00:17:25.216 "qid": 0, 00:17:25.216 "state": "enabled", 00:17:25.217 "thread": "nvmf_tgt_poll_group_000", 00:17:25.217 "listen_address": { 00:17:25.217 "trtype": "RDMA", 00:17:25.217 "adrfam": "IPv4", 00:17:25.217 "traddr": "192.168.100.8", 00:17:25.217 "trsvcid": "4420" 00:17:25.217 }, 00:17:25.217 "peer_address": { 00:17:25.217 "trtype": "RDMA", 00:17:25.217 "adrfam": "IPv4", 00:17:25.217 "traddr": "192.168.100.8", 00:17:25.217 "trsvcid": "56873" 00:17:25.217 }, 00:17:25.217 "auth": { 00:17:25.217 "state": "completed", 00:17:25.217 "digest": "sha512", 00:17:25.217 "dhgroup": "ffdhe4096" 00:17:25.217 } 00:17:25.217 } 00:17:25.217 ]' 00:17:25.217 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.217 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.217 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.217 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.217 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.217 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.217 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.217 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.475 13:47:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:17:26.041 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.041 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:26.041 13:47:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.041 13:47:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.041 13:47:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.041 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.041 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.041 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.041 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.300 13:47:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.558 00:17:26.558 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.558 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.558 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.818 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.818 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.818 13:47:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.818 13:47:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.818 13:47:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.818 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.818 { 00:17:26.818 "cntlid": 129, 00:17:26.818 "qid": 0, 00:17:26.818 "state": "enabled", 00:17:26.818 "thread": "nvmf_tgt_poll_group_000", 00:17:26.818 "listen_address": { 00:17:26.818 "trtype": "RDMA", 00:17:26.818 "adrfam": "IPv4", 00:17:26.818 "traddr": "192.168.100.8", 00:17:26.818 "trsvcid": "4420" 00:17:26.818 }, 00:17:26.818 "peer_address": { 00:17:26.818 "trtype": "RDMA", 00:17:26.818 "adrfam": "IPv4", 00:17:26.818 "traddr": "192.168.100.8", 00:17:26.818 "trsvcid": "47286" 00:17:26.818 }, 00:17:26.818 "auth": { 00:17:26.818 "state": "completed", 00:17:26.818 "digest": "sha512", 00:17:26.818 "dhgroup": "ffdhe6144" 00:17:26.818 } 00:17:26.818 } 00:17:26.818 ]' 00:17:26.818 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:26.818 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.818 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.076 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.076 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.076 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.076 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.076 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.076 13:47:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.008 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.574 00:17:28.574 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.574 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.574 13:47:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.574 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.574 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.574 13:47:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.574 13:47:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.574 13:47:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.574 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.574 { 00:17:28.574 "cntlid": 131, 00:17:28.574 "qid": 0, 00:17:28.574 "state": "enabled", 00:17:28.574 "thread": "nvmf_tgt_poll_group_000", 00:17:28.574 "listen_address": { 00:17:28.574 "trtype": "RDMA", 00:17:28.574 "adrfam": "IPv4", 00:17:28.574 "traddr": "192.168.100.8", 00:17:28.574 "trsvcid": "4420" 00:17:28.574 }, 00:17:28.574 "peer_address": { 00:17:28.574 "trtype": "RDMA", 00:17:28.574 "adrfam": "IPv4", 00:17:28.574 "traddr": "192.168.100.8", 00:17:28.574 "trsvcid": "39197" 00:17:28.574 }, 00:17:28.574 "auth": { 00:17:28.574 "state": "completed", 00:17:28.574 "digest": "sha512", 00:17:28.574 "dhgroup": "ffdhe6144" 00:17:28.574 } 00:17:28.574 } 00:17:28.574 ]' 00:17:28.574 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.832 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.832 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.832 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.832 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.832 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.832 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.832 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.091 13:47:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:17:29.657 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.657 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:29.657 13:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.657 13:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.657 13:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.657 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.657 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.657 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.915 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.174 00:17:30.174 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.174 13:47:56 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.174 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.432 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.432 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.432 13:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.432 13:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.432 13:47:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.432 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.432 { 00:17:30.432 "cntlid": 133, 00:17:30.432 "qid": 0, 00:17:30.432 "state": "enabled", 00:17:30.432 "thread": "nvmf_tgt_poll_group_000", 00:17:30.432 "listen_address": { 00:17:30.432 "trtype": "RDMA", 00:17:30.432 "adrfam": "IPv4", 00:17:30.432 "traddr": "192.168.100.8", 00:17:30.432 "trsvcid": "4420" 00:17:30.432 }, 00:17:30.432 "peer_address": { 00:17:30.432 "trtype": "RDMA", 00:17:30.432 "adrfam": "IPv4", 00:17:30.432 "traddr": "192.168.100.8", 00:17:30.433 "trsvcid": "44657" 00:17:30.433 }, 00:17:30.433 "auth": { 00:17:30.433 "state": "completed", 00:17:30.433 "digest": "sha512", 00:17:30.433 "dhgroup": "ffdhe6144" 00:17:30.433 } 00:17:30.433 } 00:17:30.433 ]' 00:17:30.433 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.433 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.433 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.690 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.690 13:47:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.690 13:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.690 13:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.690 13:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.947 13:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:17:31.509 13:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.510 13:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:31.510 13:47:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.510 13:47:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.510 
13:47:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.510 13:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.510 13:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.510 13:47:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.767 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.024 00:17:32.024 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.024 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.024 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.282 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.282 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.282 13:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.282 13:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.282 13:47:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.282 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.282 { 00:17:32.282 "cntlid": 135, 00:17:32.282 "qid": 0, 00:17:32.282 "state": "enabled", 00:17:32.282 "thread": "nvmf_tgt_poll_group_000", 00:17:32.282 "listen_address": { 
00:17:32.282 "trtype": "RDMA", 00:17:32.282 "adrfam": "IPv4", 00:17:32.282 "traddr": "192.168.100.8", 00:17:32.282 "trsvcid": "4420" 00:17:32.282 }, 00:17:32.282 "peer_address": { 00:17:32.282 "trtype": "RDMA", 00:17:32.282 "adrfam": "IPv4", 00:17:32.282 "traddr": "192.168.100.8", 00:17:32.282 "trsvcid": "52449" 00:17:32.282 }, 00:17:32.282 "auth": { 00:17:32.282 "state": "completed", 00:17:32.282 "digest": "sha512", 00:17:32.282 "dhgroup": "ffdhe6144" 00:17:32.282 } 00:17:32.282 } 00:17:32.282 ]' 00:17:32.282 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.282 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.282 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.540 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.540 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.540 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.540 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.540 13:47:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.540 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:33.476 13:47:59 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.476 13:47:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.042 00:17:34.042 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.042 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.042 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.300 { 00:17:34.300 "cntlid": 137, 00:17:34.300 "qid": 0, 00:17:34.300 "state": "enabled", 00:17:34.300 "thread": "nvmf_tgt_poll_group_000", 00:17:34.300 "listen_address": { 00:17:34.300 "trtype": "RDMA", 00:17:34.300 "adrfam": "IPv4", 00:17:34.300 "traddr": "192.168.100.8", 00:17:34.300 "trsvcid": "4420" 00:17:34.300 }, 00:17:34.300 "peer_address": { 00:17:34.300 "trtype": "RDMA", 00:17:34.300 "adrfam": "IPv4", 00:17:34.300 "traddr": "192.168.100.8", 00:17:34.300 "trsvcid": "47389" 00:17:34.300 }, 00:17:34.300 "auth": { 00:17:34.300 "state": "completed", 00:17:34.300 "digest": "sha512", 00:17:34.300 "dhgroup": "ffdhe8192" 00:17:34.300 } 00:17:34.300 } 00:17:34.300 ]' 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.300 13:48:00 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.300 13:48:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.558 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:17:35.124 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.382 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:35.382 13:48:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.382 13:48:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.382 13:48:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.382 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.382 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.383 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.640 13:48:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.898 00:17:36.157 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.157 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.157 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.157 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.157 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.157 13:48:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.157 13:48:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.157 13:48:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.157 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.157 { 00:17:36.157 "cntlid": 139, 00:17:36.157 "qid": 0, 00:17:36.157 "state": "enabled", 00:17:36.157 "thread": "nvmf_tgt_poll_group_000", 00:17:36.157 "listen_address": { 00:17:36.157 "trtype": "RDMA", 00:17:36.157 "adrfam": "IPv4", 00:17:36.157 "traddr": "192.168.100.8", 00:17:36.157 "trsvcid": "4420" 00:17:36.157 }, 00:17:36.157 "peer_address": { 00:17:36.157 "trtype": "RDMA", 00:17:36.157 "adrfam": "IPv4", 00:17:36.157 "traddr": "192.168.100.8", 00:17:36.157 "trsvcid": "47990" 00:17:36.157 }, 00:17:36.157 "auth": { 00:17:36.157 "state": "completed", 00:17:36.157 "digest": "sha512", 00:17:36.157 "dhgroup": "ffdhe8192" 00:17:36.157 } 00:17:36.157 } 00:17:36.157 ]' 00:17:36.157 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.415 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.415 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.415 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.415 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.415 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.415 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.415 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.415 13:48:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NzdiZmRkNDkxOTliOTk5NDc1MGJhMzRmYzY0MGJkZGHEYTU3: --dhchap-ctrl-secret 
DHHC-1:02:ZmRkNzUxODQyMzRiMGY1MzVhZDgwZTdiNzhjNzFiZGI3MTY0ZGM3OTYyZThhMzgzIs2PEg==: 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.347 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.348 13:48:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.962 00:17:37.962 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.962 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.962 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.220 13:48:04 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.220 { 00:17:38.220 "cntlid": 141, 00:17:38.220 "qid": 0, 00:17:38.220 "state": "enabled", 00:17:38.220 "thread": "nvmf_tgt_poll_group_000", 00:17:38.220 "listen_address": { 00:17:38.220 "trtype": "RDMA", 00:17:38.220 "adrfam": "IPv4", 00:17:38.220 "traddr": "192.168.100.8", 00:17:38.220 "trsvcid": "4420" 00:17:38.220 }, 00:17:38.220 "peer_address": { 00:17:38.220 "trtype": "RDMA", 00:17:38.220 "adrfam": "IPv4", 00:17:38.220 "traddr": "192.168.100.8", 00:17:38.220 "trsvcid": "41448" 00:17:38.220 }, 00:17:38.220 "auth": { 00:17:38.220 "state": "completed", 00:17:38.220 "digest": "sha512", 00:17:38.220 "dhgroup": "ffdhe8192" 00:17:38.220 } 00:17:38.220 } 00:17:38.220 ]' 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.220 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.477 13:48:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTNhOWU0Nzc0Mjg1YjgyY2JiMDMyYWI0OWU3YmY2NjkxNDJhZjA4OWY5ZjY2ODU39H4Wag==: --dhchap-ctrl-secret DHHC-1:01:NjMzZDc5Nzk4MTkyM2RjNDFhYWM4M2UyNmE2MjRiZjnkBMUV: 00:17:39.043 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.300 13:48:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.558 13:48:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.558 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.558 13:48:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.817 00:17:39.817 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.817 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.817 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.110 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.110 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.110 13:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.110 13:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.110 13:48:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.110 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.110 { 00:17:40.110 "cntlid": 143, 00:17:40.110 "qid": 0, 00:17:40.110 "state": "enabled", 00:17:40.110 "thread": "nvmf_tgt_poll_group_000", 00:17:40.110 "listen_address": { 00:17:40.110 "trtype": "RDMA", 00:17:40.110 "adrfam": "IPv4", 00:17:40.110 "traddr": "192.168.100.8", 00:17:40.110 "trsvcid": "4420" 00:17:40.110 }, 00:17:40.110 "peer_address": { 00:17:40.110 "trtype": "RDMA", 00:17:40.110 "adrfam": "IPv4", 00:17:40.110 "traddr": "192.168.100.8", 
00:17:40.110 "trsvcid": "53680" 00:17:40.110 }, 00:17:40.110 "auth": { 00:17:40.110 "state": "completed", 00:17:40.110 "digest": "sha512", 00:17:40.110 "dhgroup": "ffdhe8192" 00:17:40.110 } 00:17:40.110 } 00:17:40.110 ]' 00:17:40.110 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.110 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.110 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.110 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.393 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.393 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.393 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.393 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.393 13:48:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:17:40.959 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.217 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:41.217 13:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.217 13:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.217 13:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.217 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:41.217 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:41.218 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:41.218 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:41.218 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:41.218 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.476 13:48:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.043 00:17:42.043 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.043 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.044 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.044 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.044 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.044 13:48:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.044 13:48:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.044 13:48:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.044 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.044 { 00:17:42.044 "cntlid": 145, 00:17:42.044 "qid": 0, 00:17:42.044 "state": "enabled", 00:17:42.044 "thread": "nvmf_tgt_poll_group_000", 00:17:42.044 "listen_address": { 00:17:42.044 "trtype": "RDMA", 00:17:42.044 "adrfam": "IPv4", 00:17:42.044 "traddr": "192.168.100.8", 00:17:42.044 "trsvcid": "4420" 00:17:42.044 }, 00:17:42.044 "peer_address": { 00:17:42.044 "trtype": "RDMA", 00:17:42.044 "adrfam": "IPv4", 00:17:42.044 "traddr": "192.168.100.8", 00:17:42.044 "trsvcid": "42956" 00:17:42.044 }, 00:17:42.044 "auth": { 00:17:42.044 "state": "completed", 00:17:42.044 "digest": "sha512", 00:17:42.044 "dhgroup": "ffdhe8192" 00:17:42.044 } 00:17:42.044 } 00:17:42.044 ]' 00:17:42.044 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.044 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.044 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.303 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == 
\f\f\d\h\e\8\1\9\2 ]] 00:17:42.303 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.303 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.303 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.303 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.561 13:48:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2ZlOWI4ZjI4MDQ3MjY5MzEyNGJhYWVhYTZiZDA4OTYxNGExYzhiYmI5MDY1NDM0PVSwyQ==: --dhchap-ctrl-secret DHHC-1:03:ZmMxZGUzNjAzOTYyMzRkZjc3NDhhZDc3NzkwOGEwYWEwOGE0ODYzZjNjZGVmMTVmOTViZDM1YmRlZTE0YzhiOQUrRVw=: 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:43.128 13:48:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:15.208 request: 00:18:15.208 { 00:18:15.208 "name": "nvme0", 00:18:15.208 "trtype": "rdma", 00:18:15.208 "traddr": "192.168.100.8", 00:18:15.208 "adrfam": "ipv4", 00:18:15.208 "trsvcid": "4420", 00:18:15.208 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:15.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:18:15.208 "prchk_reftag": false, 00:18:15.208 "prchk_guard": false, 00:18:15.208 "hdgst": false, 00:18:15.208 "ddgst": false, 00:18:15.208 "dhchap_key": "key2", 00:18:15.208 "method": "bdev_nvme_attach_controller", 00:18:15.208 "req_id": 1 00:18:15.208 } 00:18:15.208 Got JSON-RPC error response 00:18:15.208 response: 00:18:15.208 { 00:18:15.208 "code": -5, 00:18:15.208 "message": "Input/output error" 00:18:15.208 } 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:15.208 13:48:40 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:15.208 13:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:15.208 request: 00:18:15.208 { 00:18:15.208 "name": "nvme0", 00:18:15.208 "trtype": "rdma", 00:18:15.208 "traddr": "192.168.100.8", 00:18:15.208 "adrfam": "ipv4", 00:18:15.208 "trsvcid": "4420", 00:18:15.208 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:15.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:18:15.208 "prchk_reftag": false, 00:18:15.208 "prchk_guard": false, 00:18:15.208 "hdgst": false, 00:18:15.209 "ddgst": false, 00:18:15.209 "dhchap_key": "key1", 00:18:15.209 "dhchap_ctrlr_key": "ckey2", 00:18:15.209 "method": "bdev_nvme_attach_controller", 00:18:15.209 "req_id": 1 00:18:15.209 } 00:18:15.209 Got JSON-RPC error response 00:18:15.209 response: 00:18:15.209 { 00:18:15.209 "code": -5, 00:18:15.209 "message": "Input/output error" 00:18:15.209 } 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key1 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
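(Note on the NOT-wrapped attach attempts traced in this part of the log: the subsystem is configured to allow only key1 for this host, so host-side bdev_nvme_attach_controller calls that present key2, or a controller key the target was not given, are expected to fail DH-HMAC-CHAP negotiation and return JSON-RPC error -5 "Input/output error"; the NOT wrapper converts that failure into a passing check (es=1). A minimal standalone sketch of the same negative-path check, reusing only the host RPC socket, NQNs, address and flags that appear in this log; the surrounding shell, messages and exit handling are illustrative:)

#!/usr/bin/env bash
# Negative-path DH-CHAP check: attaching with a key the subsystem does not
# allow for this host must fail. Paths, NQNs and the host-side RPC socket
# are taken from this log run; everything else is a sketch.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562

# bdev_nvme_attach_controller is expected to return an error here, so invert
# the exit status: the check passes only when the RPC is rejected.
if "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2; then
    echo "unexpected success: attach with disallowed key2" >&2
    exit 1
fi
echo "attach with disallowed key correctly rejected"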
00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.209 13:48:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.286 request: 00:18:47.286 { 00:18:47.286 "name": "nvme0", 00:18:47.286 "trtype": "rdma", 00:18:47.286 "traddr": "192.168.100.8", 00:18:47.286 "adrfam": "ipv4", 00:18:47.286 "trsvcid": "4420", 00:18:47.286 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:47.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:18:47.286 "prchk_reftag": false, 00:18:47.286 "prchk_guard": false, 00:18:47.286 "hdgst": false, 00:18:47.286 "ddgst": false, 00:18:47.286 "dhchap_key": "key1", 00:18:47.286 "dhchap_ctrlr_key": "ckey1", 00:18:47.286 "method": "bdev_nvme_attach_controller", 00:18:47.286 "req_id": 1 00:18:47.286 } 00:18:47.286 Got JSON-RPC error response 00:18:47.286 response: 00:18:47.286 { 00:18:47.286 "code": -5, 00:18:47.286 "message": "Input/output error" 00:18:47.286 } 00:18:47.286 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:47.286 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:47.286 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:47.286 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:47.286 13:49:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:18:47.286 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.286 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2485752 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2485752 ']' 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # kill -0 2485752 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2485752 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2485752' 00:18:47.287 killing process with pid 2485752 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2485752 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2485752 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2512969 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2512969 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2512969 ']' 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
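(Note on the restart sequence traced here: after the first target process, pid 2485752, is killed, nvmfappstart relaunches nvmf_tgt with RPC processing deferred (--wait-for-rpc) and auth-level logging enabled (-L nvmf_auth), and waitforlisten blocks until the new process, pid 2512969, answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, assuming only the binary path, flags and socket shown in this log; the polling loop and the rpc_get_methods probe are illustrative, not the exact waitforlisten implementation:)

#!/usr/bin/env bash
# Launch nvmf_tgt with deferred RPC handling and nvmf_auth tracing, then poll
# its UNIX-domain RPC socket until it responds. Flags and paths come from this
# log; the loop body is a sketch.
set -euo pipefail

tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

"$tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# Wait until the target is listening on its RPC socket (rpc_get_methods is
# available even while the app is paused in --wait-for-rpc state).
for _ in $(seq 1 100); do
    if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is up on $sock"
        exit 0
    fi
    sleep 0.5
done
echo "timed out waiting for $sock" >&2
kill "$nvmfpid" 2>/dev/null || true
exit 1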
00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.287 13:49:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2512969 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2512969 ']' 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.287 13:49:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.287 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.287 { 00:18:47.287 "cntlid": 1, 00:18:47.287 "qid": 0, 00:18:47.287 "state": "enabled", 00:18:47.287 "thread": "nvmf_tgt_poll_group_000", 00:18:47.287 "listen_address": { 00:18:47.287 "trtype": "RDMA", 00:18:47.287 "adrfam": "IPv4", 00:18:47.287 "traddr": "192.168.100.8", 00:18:47.287 "trsvcid": "4420" 00:18:47.287 }, 00:18:47.287 "peer_address": { 00:18:47.287 "trtype": "RDMA", 00:18:47.287 "adrfam": "IPv4", 00:18:47.287 "traddr": "192.168.100.8", 00:18:47.287 "trsvcid": "39176" 00:18:47.287 }, 00:18:47.287 "auth": { 00:18:47.287 "state": "completed", 00:18:47.287 "digest": "sha512", 00:18:47.287 "dhgroup": "ffdhe8192" 00:18:47.287 } 00:18:47.287 } 00:18:47.287 ]' 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.287 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.547 13:49:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid 
809f3706-e051-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:YjI2MjI4ZjkzM2Q0ODc5Zjk2MmQ3NDVjZDUwMTcyZmQ1M2Q2M2Q1YmY3MzNmMmUwNjU4MjE4NDc4YzVjYTE0ZuVeRh0=: 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --dhchap-key key3 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:48.114 13:49:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:48.374 13:49:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.374 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:48.374 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.374 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:48.374 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.374 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:48.374 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:48.374 13:49:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.374 13:49:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.561 request: 00:19:20.561 { 00:19:20.561 "name": "nvme0", 
00:19:20.561 "trtype": "rdma", 00:19:20.561 "traddr": "192.168.100.8", 00:19:20.561 "adrfam": "ipv4", 00:19:20.561 "trsvcid": "4420", 00:19:20.561 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:20.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:19:20.561 "prchk_reftag": false, 00:19:20.561 "prchk_guard": false, 00:19:20.561 "hdgst": false, 00:19:20.561 "ddgst": false, 00:19:20.561 "dhchap_key": "key3", 00:19:20.561 "method": "bdev_nvme_attach_controller", 00:19:20.561 "req_id": 1 00:19:20.561 } 00:19:20.561 Got JSON-RPC error response 00:19:20.561 response: 00:19:20.561 { 00:19:20.561 "code": -5, 00:19:20.561 "message": "Input/output error" 00:19:20.561 } 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.561 13:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.637 request: 00:19:52.637 { 00:19:52.637 "name": "nvme0", 
00:19:52.637 "trtype": "rdma", 00:19:52.637 "traddr": "192.168.100.8", 00:19:52.637 "adrfam": "ipv4", 00:19:52.637 "trsvcid": "4420", 00:19:52.637 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:19:52.637 "prchk_reftag": false, 00:19:52.637 "prchk_guard": false, 00:19:52.637 "hdgst": false, 00:19:52.637 "ddgst": false, 00:19:52.637 "dhchap_key": "key3", 00:19:52.637 "method": "bdev_nvme_attach_controller", 00:19:52.637 "req_id": 1 00:19:52.637 } 00:19:52.637 Got JSON-RPC error response 00:19:52.637 response: 00:19:52.637 { 00:19:52.637 "code": -5, 00:19:52.637 "message": "Input/output error" 00:19:52.637 } 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.637 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.637 request: 00:19:52.637 { 00:19:52.637 "name": "nvme0", 00:19:52.637 "trtype": "rdma", 00:19:52.637 "traddr": "192.168.100.8", 00:19:52.637 "adrfam": "ipv4", 00:19:52.637 "trsvcid": "4420", 00:19:52.637 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562", 00:19:52.637 "prchk_reftag": false, 00:19:52.637 "prchk_guard": false, 00:19:52.637 "hdgst": false, 00:19:52.637 "ddgst": false, 00:19:52.637 "dhchap_key": "key0", 00:19:52.637 "dhchap_ctrlr_key": "key1", 00:19:52.637 "method": "bdev_nvme_attach_controller", 00:19:52.637 "req_id": 1 00:19:52.637 } 00:19:52.637 Got JSON-RPC error response 00:19:52.637 response: 00:19:52.637 { 00:19:52.637 "code": -5, 00:19:52.637 "message": "Input/output error" 00:19:52.637 } 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.637 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2485846 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2485846 ']' 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2485846 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2485846 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2485846' 00:19:52.637 killing process with pid 2485846 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2485846 00:19:52.637 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2485846 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:52.637 rmmod nvme_rdma 00:19:52.637 rmmod nvme_fabrics 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2512969 ']' 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2512969 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2512969 ']' 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2512969 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.637 13:50:17 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2512969 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2512969' 00:19:52.637 killing process with pid 2512969 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2512969 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2512969 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.K7X /tmp/spdk.key-sha256.Hl7 /tmp/spdk.key-sha384.lVx /tmp/spdk.key-sha512.PAK /tmp/spdk.key-sha512.8mT /tmp/spdk.key-sha384.yGT /tmp/spdk.key-sha256.tjJ '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:19:52.637 00:19:52.637 real 4m28.458s 00:19:52.637 user 9m37.320s 00:19:52.637 sys 0m24.348s 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.637 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.637 ************************************ 00:19:52.637 END TEST nvmf_auth_target 00:19:52.637 ************************************ 00:19:52.637 13:50:17 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:19:52.638 13:50:17 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:19:52.638 13:50:17 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:52.638 13:50:17 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:52.638 13:50:17 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:19:52.638 13:50:17 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:19:52.638 13:50:17 nvmf_rdma -- nvmf/nvmf.sh@81 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:19:52.638 13:50:17 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:52.638 13:50:17 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.638 13:50:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:52.638 ************************************ 00:19:52.638 START TEST nvmf_srq_overwhelm 00:19:52.638 ************************************ 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:19:52.638 * Looking for test storage... 
00:19:52.638 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.638 13:50:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:57.927 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:57.927 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:57.927 Found net devices under 0000:18:00.0: mlx_0_0 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:57.927 Found net devices under 0000:18:00.1: mlx_0_1 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:57.927 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:57.927 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:19:57.927 altname enp24s0f0np0 00:19:57.927 altname ens785f0np0 00:19:57.927 inet 192.168.100.8/24 scope global mlx_0_0 00:19:57.927 valid_lft forever preferred_lft forever 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:57.927 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:57.928 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:57.928 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:19:57.928 altname enp24s0f1np1 00:19:57.928 altname ens785f1np1 00:19:57.928 inet 192.168.100.9/24 scope global mlx_0_1 00:19:57.928 valid_lft forever preferred_lft forever 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
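The block above loads the IB/RDMA kernel modules, maps each Mellanox PCI function to its netdev through sysfs, and then reads each interface's IPv4 address with ip/awk/cut. A minimal standalone sketch of those steps, using only the module list, sysfs layout, interface names and addresses observed in this run (the get_ip_address helper below simply mirrors the traced pipeline, it is not the harness's own code):

# RDMA/IB modules loaded by load_ib_rdma_modules in the trace
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
done

# Net devices backed by a given PCI function, as the device scan above does
pci=0000:18:00.0
ls "/sys/bus/pci/devices/$pci/net/"       # -> mlx_0_0 on this host

# IPv4 address of an interface, equivalent to the traced ip/awk/cut pipeline
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0                    # 192.168.100.8 in this run
get_ip_address mlx_0_1                    # 192.168.100.9 in this run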
00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:57.928 
192.168.100.9' 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:57.928 192.168.100.9' 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:57.928 192.168.100.9' 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:19:57.928 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=2524400 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 2524400 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@829 -- # '[' -z 2524400 ']' 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.187 13:50:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:58.187 [2024-07-15 13:50:24.541386] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
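RDMA_IP_LIST above is just a two-line string, and the first and second target addresses are peeled off it with head/tail exactly as traced. The same split in isolation, with this run's values, followed by the transport options as they end up and the initiator-side driver load (a sketch only; variable names are the harness's, the $'...' literal is an assumption for readability):

RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma        # host-side NVMe/RDMA driver, loaded at the end of the block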
00:19:58.187 [2024-07-15 13:50:24.541449] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.187 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.187 [2024-07-15 13:50:24.626830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.446 [2024-07-15 13:50:24.717681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.446 [2024-07-15 13:50:24.717722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.446 [2024-07-15 13:50:24.717732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.446 [2024-07-15 13:50:24.717741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.446 [2024-07-15 13:50:24.717748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.446 [2024-07-15 13:50:24.717815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.446 [2024-07-15 13:50:24.717909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.446 [2024-07-15 13:50:24.718012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.446 [2024-07-15 13:50:24.718013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@862 -- # return 0 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:59.012 [2024-07-15 13:50:25.435995] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xaf9180/0xafd670) succeed. 00:19:59.012 [2024-07-15 13:50:25.445534] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xafa7c0/0xb3ed00) succeed. 
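At this point the target application is up (the four reactor notices above correspond to core mask 0xF) and the RDMA transport has been created; the two create_ib_device notices confirm both mlx5 ports were claimed. Roughly the same sequence done by hand, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (assumed equivalent here) and the binary path taken from this workspace:

# Start the target: app instance 0, tracepoint group mask 0xFFFF, core mask 0xF
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Once /var/tmp/spdk.sock accepts RPCs, create the RDMA transport with the
# same options the test passes (-t, --num-shared-buffers, -u, -s as traced)
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024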
00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:59.012 Malloc0 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.012 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:59.270 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.270 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:59.270 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.270 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:59.270 [2024-07-15 13:50:25.548868] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:59.270 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.270 13:50:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:00.204 Malloc1 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.204 13:50:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:01.139 Malloc2 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.139 13:50:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:20:02.515 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:02.516 Malloc3 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.516 13:50:28 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.516 13:50:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:03.505 Malloc4 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
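Each pass through the seq 0 5 loop traced above and below repeats the same five steps: create a subsystem, create a malloc bdev, attach it as a namespace, add an RDMA listener, then connect from the initiator and wait for the block device. One iteration written out as plain commands (scripts/rpc.py standing in for rpc_cmd; the NQN pattern, serial number, address and host identifiers are the ones used in this run; the until loop is a simplified stand-in for waitforblk):

i=0
rpc=scripts/rpc.py

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
$rpc bdev_malloc_create 64 512 -b Malloc$i
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420

# Initiator side: connect, then wait for /dev/nvme${i}n1 to appear in lsblk
nvme connect -i 15 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 \
    --hostid=809f3706-e051-e711-906e-0017a4403562 \
    -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
until lsblk -l -o NAME | grep -q -w nvme${i}n1; do sleep 1; done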
00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.505 13:50:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:20:04.440 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:20:04.440 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:04.441 Malloc5 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.441 13:50:30 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:20:05.375 13:50:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:20:05.375 13:50:31 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1235 -- # local i=0 00:20:05.375 13:50:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:05.375 13:50:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:20:05.375 13:50:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:05.375 13:50:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:20:05.375 13:50:31 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:05.375 13:50:31 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:20:05.375 [global] 00:20:05.375 thread=1 00:20:05.375 invalidate=1 00:20:05.375 rw=read 00:20:05.375 time_based=1 00:20:05.375 runtime=10 00:20:05.375 ioengine=libaio 00:20:05.375 direct=1 00:20:05.375 bs=1048576 00:20:05.375 iodepth=128 00:20:05.375 norandommap=1 00:20:05.375 numjobs=13 00:20:05.375 00:20:05.375 [job0] 00:20:05.375 filename=/dev/nvme0n1 00:20:05.375 [job1] 00:20:05.375 filename=/dev/nvme1n1 00:20:05.375 [job2] 00:20:05.375 filename=/dev/nvme2n1 00:20:05.375 [job3] 00:20:05.375 filename=/dev/nvme3n1 00:20:05.375 [job4] 00:20:05.375 filename=/dev/nvme4n1 00:20:05.375 [job5] 00:20:05.375 filename=/dev/nvme5n1 00:20:05.644 Could not set queue depth (nvme0n1) 00:20:05.644 Could not set queue depth (nvme1n1) 00:20:05.644 Could not set queue depth (nvme2n1) 00:20:05.644 Could not set queue depth (nvme3n1) 00:20:05.644 Could not set queue depth (nvme4n1) 00:20:05.644 Could not set queue depth (nvme5n1) 00:20:05.902 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:05.902 ... 00:20:05.902 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:05.902 ... 00:20:05.902 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:05.902 ... 00:20:05.902 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:05.902 ... 00:20:05.902 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:05.902 ... 00:20:05.902 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:05.902 ... 
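The fio-wrapper call above expands into the job file printed after it: a shared [global] section built from the wrapper's arguments (bs=1048576, iodepth=128, rw=read, runtime=10, numjobs=13) and one [jobN] section per connected namespace, which is why fio then reports 78 threads (6 jobs x 13 numjobs). A rough standalone equivalent that regenerates the same job file and runs fio directly (the file name is arbitrary):

cat > srq_overwhelm.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13
EOF
for i in $(seq 0 5); do
    printf '\n[job%d]\nfilename=/dev/nvme%dn1\n' "$i" "$i" >> srq_overwhelm.fio
done
fio srq_overwhelm.fio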
00:20:05.902 fio-3.35 00:20:05.902 Starting 78 threads 00:20:20.779 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525722: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=3, BW=3306KiB/s (3386kB/s)(41.0MiB/12698msec) 00:20:20.779 slat (usec): min=683, max=2098.9k, avg=258247.25, stdev=669903.60 00:20:20.779 clat (msec): min=2109, max=12696, avg=9023.46, stdev=3498.86 00:20:20.779 lat (msec): min=4166, max=12697, avg=9281.71, stdev=3363.90 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:20:20.779 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:20:20.779 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.779 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.779 | 99.99th=[12684] 00:20:20.779 lat (msec) : >=2000=100.00% 00:20:20.779 cpu : usr=0.00%, sys=0.26%, ctx=69, majf=0, minf=10497 00:20:20.779 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.779 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525723: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=97, BW=97.3MiB/s (102MB/s)(1037MiB/10655msec) 00:20:20.779 slat (usec): min=43, max=2024.8k, avg=10212.70, stdev=97505.33 00:20:20.779 clat (msec): min=59, max=4594, avg=1250.37, stdev=1277.15 00:20:20.779 lat (msec): min=235, max=4594, avg=1260.58, stdev=1280.11 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 251], 5.00th=[ 284], 10.00th=[ 321], 20.00th=[ 397], 00:20:20.779 | 30.00th=[ 575], 40.00th=[ 651], 50.00th=[ 667], 60.00th=[ 718], 00:20:20.779 | 70.00th=[ 793], 80.00th=[ 2333], 90.00th=[ 4044], 95.00th=[ 4463], 00:20:20.779 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:20:20.779 | 99.99th=[ 4597] 00:20:20.779 bw ( KiB/s): min= 6144, max=407552, per=4.71%, avg=143173.15, stdev=114719.03, samples=13 00:20:20.779 iops : min= 6, max= 398, avg=139.77, stdev=112.01, samples=13 00:20:20.779 lat (msec) : 100=0.10%, 250=0.77%, 500=26.71%, 750=36.26%, 1000=9.84% 00:20:20.779 lat (msec) : >=2000=26.33% 00:20:20.779 cpu : usr=0.07%, sys=1.48%, ctx=998, majf=0, minf=32769 00:20:20.779 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.779 issued rwts: total=1037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525724: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=1, BW=1619KiB/s (1658kB/s)(20.0MiB/12647msec) 00:20:20.779 slat (usec): min=813, max=2111.1k, avg=527303.04, stdev=907046.81 00:20:20.779 clat (msec): min=2100, max=12593, avg=8557.75, stdev=3147.20 00:20:20.779 lat (msec): min=4191, max=12646, avg=9085.06, stdev=2880.58 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4178], 20.00th=[ 6342], 00:20:20.779 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[ 8557], 00:20:20.779 | 70.00th=[10671], 80.00th=[12550], 90.00th=[12550], 
95.00th=[12550], 00:20:20.779 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:20:20.779 | 99.99th=[12550] 00:20:20.779 lat (msec) : >=2000=100.00% 00:20:20.779 cpu : usr=0.00%, sys=0.11%, ctx=48, majf=0, minf=5121 00:20:20.779 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:20.779 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525725: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=4, BW=4800KiB/s (4915kB/s)(60.0MiB/12801msec) 00:20:20.779 slat (usec): min=1005, max=2123.1k, avg=178349.41, stdev=576392.66 00:20:20.779 clat (msec): min=2099, max=12797, avg=11742.75, stdev=2431.81 00:20:20.779 lat (msec): min=4182, max=12800, avg=11921.10, stdev=2079.48 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 8490], 20.00th=[12550], 00:20:20.779 | 30.00th=[12684], 40.00th=[12684], 50.00th=[12684], 60.00th=[12684], 00:20:20.779 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:20:20.779 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:20:20.779 | 99.99th=[12818] 00:20:20.779 lat (msec) : >=2000=100.00% 00:20:20.779 cpu : usr=0.00%, sys=0.50%, ctx=95, majf=0, minf=15361 00:20:20.779 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.779 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525726: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=26, BW=26.3MiB/s (27.6MB/s)(334MiB/12702msec) 00:20:20.779 slat (usec): min=458, max=2134.7k, avg=31757.16, stdev=234022.50 00:20:20.779 clat (msec): min=245, max=12608, avg=4706.61, stdev=5412.79 00:20:20.779 lat (msec): min=247, max=12618, avg=4738.37, stdev=5424.39 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 247], 5.00th=[ 247], 10.00th=[ 247], 20.00th=[ 284], 00:20:20.779 | 30.00th=[ 342], 40.00th=[ 409], 50.00th=[ 472], 60.00th=[ 3540], 00:20:20.779 | 70.00th=[11610], 80.00th=[11745], 90.00th=[11879], 95.00th=[12013], 00:20:20.779 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:20:20.779 | 99.99th=[12550] 00:20:20.779 bw ( KiB/s): min= 1532, max=294912, per=1.55%, avg=47063.44, stdev=96079.40, samples=9 00:20:20.779 iops : min= 1, max= 288, avg=45.89, stdev=93.85, samples=9 00:20:20.779 lat (msec) : 250=10.48%, 500=47.31%, 750=0.90%, 2000=0.30%, >=2000=41.02% 00:20:20.779 cpu : usr=0.02%, sys=0.61%, ctx=669, majf=0, minf=32769 00:20:20.779 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.1% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:20:20.779 issued rwts: total=334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525727: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=2, 
BW=2572KiB/s (2633kB/s)(32.0MiB/12742msec) 00:20:20.779 slat (usec): min=853, max=2123.1k, avg=332874.93, stdev=760638.04 00:20:20.779 clat (msec): min=2089, max=12738, avg=10458.41, stdev=2832.48 00:20:20.779 lat (msec): min=4205, max=12741, avg=10791.28, stdev=2411.97 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 2089], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490], 00:20:20.779 | 30.00th=[10671], 40.00th=[10671], 50.00th=[10671], 60.00th=[12550], 00:20:20.779 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.779 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.779 | 99.99th=[12684] 00:20:20.779 lat (msec) : >=2000=100.00% 00:20:20.779 cpu : usr=0.00%, sys=0.24%, ctx=66, majf=0, minf=8193 00:20:20.779 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:20.779 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525728: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=6, BW=6732KiB/s (6893kB/s)(70.0MiB/10648msec) 00:20:20.779 slat (usec): min=635, max=2095.7k, avg=151156.63, stdev=520096.72 00:20:20.779 clat (msec): min=65, max=10644, avg=6063.52, stdev=3204.44 00:20:20.779 lat (msec): min=2091, max=10646, avg=6214.68, stdev=3166.76 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 66], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2198], 00:20:20.779 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[ 6477], 00:20:20.779 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:20:20.779 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:20:20.779 | 99.99th=[10671] 00:20:20.779 lat (msec) : 100=1.43%, >=2000=98.57% 00:20:20.779 cpu : usr=0.00%, sys=0.60%, ctx=64, majf=0, minf=17921 00:20:20.779 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:20.779 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525729: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=18, BW=19.0MiB/s (19.9MB/s)(204MiB/10756msec) 00:20:20.779 slat (usec): min=483, max=2036.4k, avg=52438.52, stdev=268112.30 00:20:20.779 clat (msec): min=57, max=6973, avg=5030.06, stdev=1479.15 00:20:20.779 lat (msec): min=2093, max=6990, avg=5082.50, stdev=1440.61 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 2089], 5.00th=[ 3037], 10.00th=[ 3507], 20.00th=[ 3675], 00:20:20.779 | 30.00th=[ 3910], 40.00th=[ 4178], 50.00th=[ 4933], 60.00th=[ 6074], 00:20:20.779 | 70.00th=[ 6409], 80.00th=[ 6678], 90.00th=[ 6879], 95.00th=[ 6879], 00:20:20.779 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:20:20.779 | 99.99th=[ 6946] 00:20:20.779 bw ( KiB/s): min=14307, max=71680, per=1.71%, avg=51873.00, stdev=32549.22, samples=3 00:20:20.779 iops : min= 13, max= 70, avg=50.33, stdev=32.35, samples=3 00:20:20.779 lat (msec) : 100=0.49%, >=2000=99.51% 00:20:20.779 cpu : usr=0.00%, sys=1.14%, ctx=372, majf=0, minf=32769 
00:20:20.779 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.7%, >=64=69.1% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:20:20.779 issued rwts: total=204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525730: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=2, BW=2098KiB/s (2148kB/s)(26.0MiB/12690msec) 00:20:20.779 slat (usec): min=810, max=2126.2k, avg=406677.22, stdev=817527.27 00:20:20.779 clat (msec): min=2115, max=12671, avg=9172.42, stdev=3745.18 00:20:20.779 lat (msec): min=4188, max=12689, avg=9579.10, stdev=3515.28 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279], 00:20:20.779 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12550], 00:20:20.779 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.779 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.779 | 99.99th=[12684] 00:20:20.779 lat (msec) : >=2000=100.00% 00:20:20.779 cpu : usr=0.00%, sys=0.15%, ctx=60, majf=0, minf=6657 00:20:20.779 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:20.779 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525731: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=86, BW=86.0MiB/s (90.2MB/s)(864MiB/10044msec) 00:20:20.779 slat (usec): min=48, max=1863.2k, avg=11572.97, stdev=87141.83 00:20:20.779 clat (msec): min=40, max=7097, avg=1020.68, stdev=976.01 00:20:20.779 lat (msec): min=43, max=7109, avg=1032.25, stdev=990.93 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 63], 5.00th=[ 279], 10.00th=[ 439], 20.00th=[ 510], 00:20:20.779 | 30.00th=[ 575], 40.00th=[ 709], 50.00th=[ 793], 60.00th=[ 860], 00:20:20.779 | 70.00th=[ 986], 80.00th=[ 1217], 90.00th=[ 1385], 95.00th=[ 3272], 00:20:20.779 | 99.00th=[ 5269], 99.50th=[ 5537], 99.90th=[ 7080], 99.95th=[ 7080], 00:20:20.779 | 99.99th=[ 7080] 00:20:20.779 bw ( KiB/s): min=65536, max=274432, per=4.96%, avg=150852.00, stdev=63143.35, samples=10 00:20:20.779 iops : min= 64, max= 268, avg=147.30, stdev=61.67, samples=10 00:20:20.779 lat (msec) : 50=0.35%, 100=1.50%, 250=2.43%, 500=14.58%, 750=24.07% 00:20:20.779 lat (msec) : 1000=28.82%, 2000=20.95%, >=2000=7.29% 00:20:20.779 cpu : usr=0.00%, sys=1.63%, ctx=1296, majf=0, minf=32769 00:20:20.779 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.779 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525732: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=3, BW=3286KiB/s (3365kB/s)(41.0MiB/12778msec) 00:20:20.779 slat (usec): min=1306, max=2118.8k, avg=260664.15, stdev=682254.06 00:20:20.779 clat (msec): min=2089, max=12772, avg=11377.51, 
stdev=2801.83 00:20:20.779 lat (msec): min=4183, max=12777, avg=11638.18, stdev=2381.82 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 2089], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[10671], 00:20:20.779 | 30.00th=[12684], 40.00th=[12684], 50.00th=[12684], 60.00th=[12684], 00:20:20.779 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:20:20.779 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:20:20.779 | 99.99th=[12818] 00:20:20.779 lat (msec) : >=2000=100.00% 00:20:20.779 cpu : usr=0.00%, sys=0.37%, ctx=79, majf=0, minf=10497 00:20:20.779 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.779 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525733: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=70, BW=70.0MiB/s (73.4MB/s)(748MiB/10684msec) 00:20:20.779 slat (usec): min=49, max=1997.8k, avg=14206.23, stdev=104640.48 00:20:20.779 clat (msec): min=53, max=5611, avg=1138.42, stdev=883.20 00:20:20.779 lat (msec): min=483, max=5618, avg=1152.63, stdev=897.60 00:20:20.779 clat percentiles (msec): 00:20:20.779 | 1.00th=[ 485], 5.00th=[ 506], 10.00th=[ 542], 20.00th=[ 617], 00:20:20.779 | 30.00th=[ 693], 40.00th=[ 802], 50.00th=[ 911], 60.00th=[ 1011], 00:20:20.779 | 70.00th=[ 1116], 80.00th=[ 1385], 90.00th=[ 1938], 95.00th=[ 2333], 00:20:20.779 | 99.00th=[ 5604], 99.50th=[ 5604], 99.90th=[ 5604], 99.95th=[ 5604], 00:20:20.779 | 99.99th=[ 5604] 00:20:20.779 bw ( KiB/s): min=46823, max=245760, per=4.63%, avg=140962.78, stdev=66933.94, samples=9 00:20:20.779 iops : min= 45, max= 240, avg=137.44, stdev=65.35, samples=9 00:20:20.779 lat (msec) : 100=0.13%, 500=2.67%, 750=32.89%, 1000=22.19%, 2000=33.02% 00:20:20.779 lat (msec) : >=2000=9.09% 00:20:20.779 cpu : usr=0.04%, sys=1.35%, ctx=1035, majf=0, minf=32769 00:20:20.779 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:20:20.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.779 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:20.779 issued rwts: total=748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.779 job0: (groupid=0, jobs=1): err= 0: pid=2525734: Mon Jul 15 13:50:45 2024 00:20:20.779 read: IOPS=89, BW=90.0MiB/s (94.4MB/s)(1151MiB/12791msec) 00:20:20.779 slat (usec): min=48, max=2168.3k, avg=9277.66, stdev=109832.62 00:20:20.779 clat (msec): min=129, max=10737, avg=1382.81, stdev=2324.08 00:20:20.779 lat (msec): min=130, max=10738, avg=1392.08, stdev=2331.36 00:20:20.779 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 131], 5.00th=[ 132], 10.00th=[ 138], 20.00th=[ 194], 00:20:20.780 | 30.00th=[ 249], 40.00th=[ 279], 50.00th=[ 380], 60.00th=[ 477], 00:20:20.780 | 70.00th=[ 567], 80.00th=[ 2601], 90.00th=[ 7215], 95.00th=[ 7483], 00:20:20.780 | 99.00th=[ 7684], 99.50th=[10671], 99.90th=[10671], 99.95th=[10805], 00:20:20.780 | 99.99th=[10805] 00:20:20.780 bw ( KiB/s): min= 1921, max=743424, per=5.74%, avg=174692.75, stdev=223787.04, samples=12 00:20:20.780 iops : min= 1, max= 726, avg=170.42, stdev=218.65, samples=12 00:20:20.780 lat (msec) : 250=30.67%, 500=33.10%, 750=14.07%, 
2000=0.35%, >=2000=21.81% 00:20:20.780 cpu : usr=0.01%, sys=1.48%, ctx=1406, majf=0, minf=32769 00:20:20.780 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.780 issued rwts: total=1151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525735: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=54, BW=55.0MiB/s (57.6MB/s)(585MiB/10645msec) 00:20:20.780 slat (usec): min=48, max=2107.6k, avg=17091.36, stdev=166679.74 00:20:20.780 clat (msec): min=202, max=9050, avg=927.64, stdev=1995.69 00:20:20.780 lat (msec): min=205, max=9055, avg=944.74, stdev=2026.27 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 209], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 249], 00:20:20.780 | 30.00th=[ 251], 40.00th=[ 251], 50.00th=[ 305], 60.00th=[ 418], 00:20:20.780 | 70.00th=[ 550], 80.00th=[ 693], 90.00th=[ 810], 95.00th=[ 8792], 00:20:20.780 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:20:20.780 | 99.99th=[ 9060] 00:20:20.780 bw ( KiB/s): min=16384, max=536284, per=10.21%, avg=310516.00, stdev=266606.88, samples=3 00:20:20.780 iops : min= 16, max= 523, avg=303.00, stdev=260.06, samples=3 00:20:20.780 lat (msec) : 250=29.40%, 500=35.90%, 750=20.00%, 1000=7.86%, >=2000=6.84% 00:20:20.780 cpu : usr=0.00%, sys=1.08%, ctx=1108, majf=0, minf=32769 00:20:20.780 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:20.780 issued rwts: total=585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525736: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=134, BW=134MiB/s (141MB/s)(1348MiB/10029msec) 00:20:20.780 slat (usec): min=40, max=1850.3k, avg=7414.88, stdev=51450.12 00:20:20.780 clat (msec): min=28, max=2920, avg=731.72, stdev=733.78 00:20:20.780 lat (msec): min=30, max=2935, avg=739.13, stdev=740.32 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 69], 5.00th=[ 130], 10.00th=[ 186], 20.00th=[ 259], 00:20:20.780 | 30.00th=[ 262], 40.00th=[ 264], 50.00th=[ 266], 60.00th=[ 334], 00:20:20.780 | 70.00th=[ 1003], 80.00th=[ 1301], 90.00th=[ 1972], 95.00th=[ 2400], 00:20:20.780 | 99.00th=[ 2836], 99.50th=[ 2869], 99.90th=[ 2869], 99.95th=[ 2937], 00:20:20.780 | 99.99th=[ 2937] 00:20:20.780 bw ( KiB/s): min=34816, max=673792, per=5.48%, avg=166707.20, stdev=206579.59, samples=15 00:20:20.780 iops : min= 34, max= 658, avg=162.80, stdev=201.74, samples=15 00:20:20.780 lat (msec) : 50=0.67%, 100=0.52%, 250=14.02%, 500=46.44%, 750=3.04% 00:20:20.780 lat (msec) : 1000=5.34%, 2000=20.25%, >=2000=9.72% 00:20:20.780 cpu : usr=0.08%, sys=1.67%, ctx=2539, majf=0, minf=32769 00:20:20.780 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.780 issued rwts: total=1348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 
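A note on reading these summaries: fio reports bandwidth in both binary and decimal units, and because bs is 1 MiB the MiB/s figure tracks IOPS almost one-to-one. Checking the job0 line above that reads 1037 MiB in 10655 msec (reported as IOPS=97, BW=97.3MiB/s (102MB/s)) with a one-off awk calculation:

awk 'BEGIN { mib = 1037; ms = 10655
             mibs = mib / (ms / 1000)                       # ~97.3 MiB/s, ~97 IOPS at 1 MiB blocks
             printf "%.1f MiB/s = %.0f MB/s\n", mibs, mibs * 1048576 / 1e6 }'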
00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525737: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=3, BW=4016KiB/s (4113kB/s)(50.0MiB/12748msec) 00:20:20.780 slat (usec): min=792, max=2118.5k, avg=212771.23, stdev=620536.74 00:20:20.780 clat (msec): min=2109, max=12744, avg=10503.26, stdev=3371.45 00:20:20.780 lat (msec): min=4181, max=12747, avg=10716.03, stdev=3159.96 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:20:20.780 | 30.00th=[10671], 40.00th=[12550], 50.00th=[12684], 60.00th=[12684], 00:20:20.780 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.780 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.780 | 99.99th=[12684] 00:20:20.780 lat (msec) : >=2000=100.00% 00:20:20.780 cpu : usr=0.01%, sys=0.39%, ctx=77, majf=0, minf=12801 00:20:20.780 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.780 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525738: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=5, BW=5169KiB/s (5293kB/s)(64.0MiB/12678msec) 00:20:20.780 slat (usec): min=832, max=2092.7k, avg=164893.30, stdev=542647.06 00:20:20.780 clat (msec): min=2123, max=12674, avg=9280.75, stdev=3490.05 00:20:20.780 lat (msec): min=4165, max=12677, avg=9445.64, stdev=3394.53 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279], 00:20:20.780 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12550], 00:20:20.780 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.780 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.780 | 99.99th=[12684] 00:20:20.780 lat (msec) : >=2000=100.00% 00:20:20.780 cpu : usr=0.00%, sys=0.47%, ctx=76, majf=0, minf=16385 00:20:20.780 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.780 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525739: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=108, BW=108MiB/s (113MB/s)(1084MiB/10036msec) 00:20:20.780 slat (usec): min=38, max=1403.4k, avg=9237.00, stdev=60774.01 00:20:20.780 clat (msec): min=17, max=3027, avg=1127.31, stdev=661.85 00:20:20.780 lat (msec): min=44, max=3870, avg=1136.55, stdev=666.61 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 51], 5.00th=[ 292], 10.00th=[ 542], 20.00th=[ 617], 00:20:20.780 | 30.00th=[ 693], 40.00th=[ 760], 50.00th=[ 802], 60.00th=[ 869], 00:20:20.780 | 70.00th=[ 1821], 80.00th=[ 1921], 90.00th=[ 2089], 95.00th=[ 2123], 00:20:20.780 | 99.00th=[ 2165], 99.50th=[ 2937], 99.90th=[ 3037], 99.95th=[ 3037], 00:20:20.780 | 99.99th=[ 3037] 00:20:20.780 bw ( KiB/s): min=12288, max=212992, per=4.29%, avg=130452.53, stdev=51825.17, samples=15 00:20:20.780 iops : min= 12, max= 208, avg=127.33, stdev=50.54, samples=15 00:20:20.780 lat (msec) : 
20=0.09%, 50=0.83%, 100=0.83%, 250=2.49%, 500=4.15% 00:20:20.780 lat (msec) : 750=30.35%, 1000=24.82%, 2000=22.05%, >=2000=14.39% 00:20:20.780 cpu : usr=0.01%, sys=1.76%, ctx=998, majf=0, minf=32769 00:20:20.780 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.780 issued rwts: total=1084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525740: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=13, BW=13.2MiB/s (13.9MB/s)(140MiB/10597msec) 00:20:20.780 slat (usec): min=439, max=2114.8k, avg=75444.98, stdev=340624.81 00:20:20.780 clat (msec): min=33, max=8566, avg=3115.75, stdev=1280.64 00:20:20.780 lat (msec): min=1940, max=8579, avg=3191.20, stdev=1376.31 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 1921], 5.00th=[ 2005], 10.00th=[ 2165], 20.00th=[ 2366], 00:20:20.780 | 30.00th=[ 2534], 40.00th=[ 2668], 50.00th=[ 2769], 60.00th=[ 2937], 00:20:20.780 | 70.00th=[ 3138], 80.00th=[ 3507], 90.00th=[ 3977], 95.00th=[ 6342], 00:20:20.780 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:20:20.780 | 99.99th=[ 8557] 00:20:20.780 bw ( KiB/s): min=24576, max=24576, per=0.81%, avg=24576.00, stdev= 0.00, samples=1 00:20:20.780 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=1 00:20:20.780 lat (msec) : 50=0.71%, 2000=3.57%, >=2000=95.71% 00:20:20.780 cpu : usr=0.00%, sys=0.78%, ctx=437, majf=0, minf=32769 00:20:20.780 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.7%, 16=11.4%, 32=22.9%, >=64=55.0% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=92.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.1% 00:20:20.780 issued rwts: total=140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525741: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=2, BW=2492KiB/s (2552kB/s)(31.0MiB/12738msec) 00:20:20.780 slat (usec): min=1632, max=2134.7k, avg=343260.07, stdev=766475.65 00:20:20.780 clat (msec): min=2096, max=12731, avg=9896.60, stdev=3429.70 00:20:20.780 lat (msec): min=4171, max=12737, avg=10239.86, stdev=3143.61 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:20:20.780 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12684], 60.00th=[12684], 00:20:20.780 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.780 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.780 | 99.99th=[12684] 00:20:20.780 lat (msec) : >=2000=100.00% 00:20:20.780 cpu : usr=0.00%, sys=0.24%, ctx=72, majf=0, minf=7937 00:20:20.780 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:20.780 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525742: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=54, BW=54.4MiB/s (57.1MB/s)(545MiB/10016msec) 00:20:20.780 slat 
(usec): min=393, max=2041.3k, avg=18345.98, stdev=170405.02 00:20:20.780 clat (msec): min=14, max=8975, avg=1132.94, stdev=2375.18 00:20:20.780 lat (msec): min=16, max=8979, avg=1151.28, stdev=2399.01 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 26], 5.00th=[ 73], 10.00th=[ 127], 20.00th=[ 236], 00:20:20.780 | 30.00th=[ 262], 40.00th=[ 262], 50.00th=[ 264], 60.00th=[ 266], 00:20:20.780 | 70.00th=[ 376], 80.00th=[ 502], 90.00th=[ 4933], 95.00th=[ 8926], 00:20:20.780 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:20:20.780 | 99.99th=[ 8926] 00:20:20.780 bw ( KiB/s): min=34816, max=333824, per=6.06%, avg=184320.00, stdev=211430.58, samples=2 00:20:20.780 iops : min= 34, max= 326, avg=180.00, stdev=206.48, samples=2 00:20:20.780 lat (msec) : 20=0.55%, 50=2.57%, 100=4.59%, 250=13.39%, 500=58.53% 00:20:20.780 lat (msec) : 750=7.52%, 1000=0.37%, >=2000=12.48% 00:20:20.780 cpu : usr=0.00%, sys=1.33%, ctx=1064, majf=0, minf=32769 00:20:20.780 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:20.780 issued rwts: total=545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525743: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=49, BW=50.0MiB/s (52.4MB/s)(637MiB/12751msec) 00:20:20.780 slat (usec): min=53, max=2095.0k, avg=16705.04, stdev=158879.36 00:20:20.780 clat (msec): min=241, max=12678, avg=2202.14, stdev=3286.55 00:20:20.780 lat (msec): min=257, max=12679, avg=2218.84, stdev=3296.24 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 257], 5.00th=[ 257], 10.00th=[ 259], 20.00th=[ 326], 00:20:20.780 | 30.00th=[ 464], 40.00th=[ 518], 50.00th=[ 531], 60.00th=[ 542], 00:20:20.780 | 70.00th=[ 550], 80.00th=[ 4279], 90.00th=[ 8792], 95.00th=[ 8926], 00:20:20.780 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[12684], 99.95th=[12684], 00:20:20.780 | 99.99th=[12684] 00:20:20.780 bw ( KiB/s): min= 2048, max=278528, per=3.82%, avg=116053.33, stdev=117264.77, samples=9 00:20:20.780 iops : min= 2, max= 272, avg=113.33, stdev=114.52, samples=9 00:20:20.780 lat (msec) : 250=0.16%, 500=32.34%, 750=41.60%, >=2000=25.90% 00:20:20.780 cpu : usr=0.02%, sys=1.23%, ctx=969, majf=0, minf=32769 00:20:20.780 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:20.780 issued rwts: total=637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525744: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=134, BW=134MiB/s (141MB/s)(1346MiB/10043msec) 00:20:20.780 slat (usec): min=38, max=1937.2k, avg=7431.33, stdev=53902.23 00:20:20.780 clat (msec): min=35, max=2722, avg=689.71, stdev=661.20 00:20:20.780 lat (msec): min=47, max=3114, avg=697.14, stdev=668.82 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 232], 5.00th=[ 253], 10.00th=[ 257], 20.00th=[ 262], 00:20:20.780 | 30.00th=[ 279], 40.00th=[ 351], 50.00th=[ 388], 60.00th=[ 443], 00:20:20.780 | 70.00th=[ 550], 80.00th=[ 953], 90.00th=[ 2022], 95.00th=[ 2232], 00:20:20.780 | 99.00th=[ 2635], 99.50th=[ 
2668], 99.90th=[ 2668], 99.95th=[ 2735], 00:20:20.780 | 99.99th=[ 2735] 00:20:20.780 bw ( KiB/s): min=22528, max=475136, per=5.85%, avg=178014.57, stdev=157541.62, samples=14 00:20:20.780 iops : min= 22, max= 464, avg=173.79, stdev=153.85, samples=14 00:20:20.780 lat (msec) : 50=0.15%, 100=0.30%, 250=3.86%, 500=62.18%, 750=9.29% 00:20:20.780 lat (msec) : 1000=4.75%, 2000=9.06%, >=2000=10.40% 00:20:20.780 cpu : usr=0.10%, sys=1.67%, ctx=2491, majf=0, minf=32769 00:20:20.780 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.780 issued rwts: total=1346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525745: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=158, BW=159MiB/s (166MB/s)(1594MiB/10041msec) 00:20:20.780 slat (usec): min=45, max=98935, avg=6271.92, stdev=11762.02 00:20:20.780 clat (msec): min=36, max=2622, avg=770.47, stdev=628.74 00:20:20.780 lat (msec): min=46, max=2632, avg=776.74, stdev=632.73 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 228], 5.00th=[ 236], 10.00th=[ 247], 20.00th=[ 255], 00:20:20.780 | 30.00th=[ 259], 40.00th=[ 355], 50.00th=[ 575], 60.00th=[ 760], 00:20:20.780 | 70.00th=[ 927], 80.00th=[ 1183], 90.00th=[ 1938], 95.00th=[ 2198], 00:20:20.780 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2601], 99.95th=[ 2635], 00:20:20.780 | 99.99th=[ 2635] 00:20:20.780 bw ( KiB/s): min=26624, max=546816, per=5.20%, avg=158045.21, stdev=148155.57, samples=19 00:20:20.780 iops : min= 26, max= 534, avg=154.32, stdev=144.70, samples=19 00:20:20.780 lat (msec) : 50=0.13%, 100=0.13%, 250=15.68%, 500=28.11%, 750=15.75% 00:20:20.780 lat (msec) : 1000=15.62%, 2000=16.56%, >=2000=8.03% 00:20:20.780 cpu : usr=0.07%, sys=2.05%, ctx=2446, majf=0, minf=32769 00:20:20.780 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.780 issued rwts: total=1594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.780 job1: (groupid=0, jobs=1): err= 0: pid=2525746: Mon Jul 15 13:50:45 2024 00:20:20.780 read: IOPS=35, BW=35.6MiB/s (37.3MB/s)(453MiB/12719msec) 00:20:20.780 slat (usec): min=512, max=2061.2k, avg=23409.79, stdev=186918.69 00:20:20.780 clat (msec): min=501, max=9053, avg=2985.92, stdev=3573.83 00:20:20.780 lat (msec): min=514, max=9055, avg=3009.33, stdev=3581.29 00:20:20.780 clat percentiles (msec): 00:20:20.780 | 1.00th=[ 514], 5.00th=[ 514], 10.00th=[ 518], 20.00th=[ 518], 00:20:20.780 | 30.00th=[ 523], 40.00th=[ 531], 50.00th=[ 542], 60.00th=[ 651], 00:20:20.780 | 70.00th=[ 2836], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 8926], 00:20:20.780 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:20:20.780 | 99.99th=[ 9060] 00:20:20.780 bw ( KiB/s): min= 1460, max=249856, per=2.74%, avg=83379.88, stdev=106493.55, samples=8 00:20:20.780 iops : min= 1, max= 244, avg=81.25, stdev=104.14, samples=8 00:20:20.780 lat (msec) : 750=61.59%, 1000=2.21%, >=2000=36.20% 00:20:20.780 cpu : usr=0.01%, sys=0.90%, ctx=886, majf=0, minf=32769 00:20:20.780 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 
16=3.5%, 32=7.1%, >=64=86.1% 00:20:20.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.780 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:20.780 issued rwts: total=453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job1: (groupid=0, jobs=1): err= 0: pid=2525747: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=33, BW=33.4MiB/s (35.0MB/s)(424MiB/12687msec) 00:20:20.781 slat (usec): min=108, max=2081.7k, avg=24957.44, stdev=186635.98 00:20:20.781 clat (msec): min=510, max=9514, avg=3420.27, stdev=3459.59 00:20:20.781 lat (msec): min=512, max=9522, avg=3445.23, stdev=3467.36 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 514], 5.00th=[ 518], 10.00th=[ 531], 20.00th=[ 567], 00:20:20.781 | 30.00th=[ 726], 40.00th=[ 961], 50.00th=[ 2056], 60.00th=[ 2232], 00:20:20.781 | 70.00th=[ 4212], 80.00th=[ 8658], 90.00th=[ 9194], 95.00th=[ 9329], 00:20:20.781 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:20:20.781 | 99.99th=[ 9463] 00:20:20.781 bw ( KiB/s): min= 1532, max=227328, per=2.22%, avg=67509.78, stdev=73719.42, samples=9 00:20:20.781 iops : min= 1, max= 222, avg=65.78, stdev=72.04, samples=9 00:20:20.781 lat (msec) : 750=31.37%, 1000=9.91%, 2000=8.49%, >=2000=50.24% 00:20:20.781 cpu : usr=0.02%, sys=0.80%, ctx=802, majf=0, minf=32769 00:20:20.781 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.5%, >=64=85.1% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:20.781 issued rwts: total=424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525748: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=2, BW=2414KiB/s (2471kB/s)(25.0MiB/10607msec) 00:20:20.781 slat (msec): min=3, max=2072, avg=421.18, stdev=819.93 00:20:20.781 clat (msec): min=77, max=10521, avg=5941.19, stdev=2696.80 00:20:20.781 lat (msec): min=2143, max=10606, avg=6362.37, stdev=2561.65 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 78], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2232], 00:20:20.781 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 6477], 00:20:20.781 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[ 8658], 95.00th=[ 8658], 00:20:20.781 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:20:20.781 | 99.99th=[10537] 00:20:20.781 lat (msec) : 100=4.00%, >=2000=96.00% 00:20:20.781 cpu : usr=0.00%, sys=0.22%, ctx=73, majf=0, minf=6401 00:20:20.781 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:20.781 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525749: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=4, BW=4275KiB/s (4377kB/s)(53.0MiB/12696msec) 00:20:20.781 slat (usec): min=814, max=2068.9k, avg=199203.57, stdev=585915.44 00:20:20.781 clat (msec): min=2137, max=12693, avg=8767.92, stdev=3553.52 00:20:20.781 lat (msec): min=4059, max=12695, avg=8967.12, stdev=3469.60 00:20:20.781 clat percentiles (msec): 00:20:20.781 
| 1.00th=[ 2140], 5.00th=[ 4077], 10.00th=[ 4111], 20.00th=[ 4178], 00:20:20.781 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:20:20.781 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.781 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.781 | 99.99th=[12684] 00:20:20.781 lat (msec) : >=2000=100.00% 00:20:20.781 cpu : usr=0.00%, sys=0.35%, ctx=92, majf=0, minf=13569 00:20:20.781 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.781 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525750: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=40, BW=40.6MiB/s (42.6MB/s)(516MiB/12709msec) 00:20:20.781 slat (usec): min=82, max=2099.0k, avg=20542.48, stdev=156241.23 00:20:20.781 clat (msec): min=255, max=8474, avg=2988.97, stdev=2432.50 00:20:20.781 lat (msec): min=257, max=8481, avg=3009.51, stdev=2447.64 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 257], 5.00th=[ 259], 10.00th=[ 275], 20.00th=[ 334], 00:20:20.781 | 30.00th=[ 1083], 40.00th=[ 1116], 50.00th=[ 2500], 60.00th=[ 3574], 00:20:20.781 | 70.00th=[ 3675], 80.00th=[ 5537], 90.00th=[ 7080], 95.00th=[ 7215], 00:20:20.781 | 99.00th=[ 7215], 99.50th=[ 7282], 99.90th=[ 8490], 99.95th=[ 8490], 00:20:20.781 | 99.99th=[ 8490] 00:20:20.781 bw ( KiB/s): min= 1460, max=270336, per=2.62%, avg=79595.60, stdev=94555.57, samples=10 00:20:20.781 iops : min= 1, max= 264, avg=77.50, stdev=92.49, samples=10 00:20:20.781 lat (msec) : 500=22.29%, 2000=21.32%, >=2000=56.40% 00:20:20.781 cpu : usr=0.02%, sys=1.06%, ctx=793, majf=0, minf=32769 00:20:20.781 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.8% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:20.781 issued rwts: total=516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525751: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=1, BW=1378KiB/s (1411kB/s)(17.0MiB/12631msec) 00:20:20.781 slat (msec): min=8, max=2135, avg=618.46, stdev=946.62 00:20:20.781 clat (msec): min=2116, max=12555, avg=6342.82, stdev=2797.84 00:20:20.781 lat (msec): min=4159, max=12630, avg=6961.28, stdev=2962.36 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4144], 20.00th=[ 4178], 00:20:20.781 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 6342], 60.00th=[ 6409], 00:20:20.781 | 70.00th=[ 6409], 80.00th=[ 8490], 90.00th=[10671], 95.00th=[12550], 00:20:20.781 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:20:20.781 | 99.99th=[12550] 00:20:20.781 lat (msec) : >=2000=100.00% 00:20:20.781 cpu : usr=0.00%, sys=0.11%, ctx=55, majf=0, minf=4353 00:20:20.781 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:20.781 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:20.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525752: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=4, BW=4818KiB/s (4934kB/s)(50.0MiB/10626msec) 00:20:20.781 slat (usec): min=472, max=2063.0k, avg=210329.61, stdev=600752.20 00:20:20.781 clat (msec): min=108, max=10624, avg=7224.55, stdev=3032.41 00:20:20.781 lat (msec): min=2129, max=10625, avg=7434.88, stdev=2890.14 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 109], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:20:20.781 | 30.00th=[ 6342], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8658], 00:20:20.781 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:20:20.781 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:20:20.781 | 99.99th=[10671] 00:20:20.781 lat (msec) : 250=2.00%, >=2000=98.00% 00:20:20.781 cpu : usr=0.00%, sys=0.41%, ctx=85, majf=0, minf=12801 00:20:20.781 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.781 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525753: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=21, BW=21.6MiB/s (22.6MB/s)(274MiB/12714msec) 00:20:20.781 slat (usec): min=40, max=2083.8k, avg=38696.82, stdev=240573.51 00:20:20.781 clat (msec): min=1078, max=11389, avg=5624.68, stdev=4232.66 00:20:20.781 lat (msec): min=1083, max=11397, avg=5663.38, stdev=4237.77 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 1083], 5.00th=[ 1099], 10.00th=[ 1099], 20.00th=[ 1133], 00:20:20.781 | 30.00th=[ 1150], 40.00th=[ 3171], 50.00th=[ 4279], 60.00th=[ 7349], 00:20:20.781 | 70.00th=[10537], 80.00th=[10939], 90.00th=[11073], 95.00th=[11208], 00:20:20.781 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:20:20.781 | 99.99th=[11342] 00:20:20.781 bw ( KiB/s): min= 1460, max=122880, per=1.23%, avg=37550.25, stdev=42288.60, samples=8 00:20:20.781 iops : min= 1, max= 120, avg=36.38, stdev=41.49, samples=8 00:20:20.781 lat (msec) : 2000=37.59%, >=2000=62.41% 00:20:20.781 cpu : usr=0.03%, sys=0.83%, ctx=485, majf=0, minf=32769 00:20:20.781 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.8%, 32=11.7%, >=64=77.0% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:20:20.781 issued rwts: total=274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525754: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=3, BW=3091KiB/s (3166kB/s)(32.0MiB/10600msec) 00:20:20.781 slat (usec): min=681, max=2076.2k, avg=328773.72, stdev=741889.83 00:20:20.781 clat (msec): min=78, max=10585, avg=5706.00, stdev=3076.71 00:20:20.781 lat (msec): min=2092, max=10598, avg=6034.77, stdev=3017.47 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 79], 5.00th=[ 2089], 10.00th=[ 2165], 20.00th=[ 2198], 00:20:20.781 | 30.00th=[ 2232], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:20:20.781 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[ 8658], 95.00th=[10537], 00:20:20.781 | 99.00th=[10537], 
99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:20:20.781 | 99.99th=[10537] 00:20:20.781 lat (msec) : 100=3.12%, >=2000=96.88% 00:20:20.781 cpu : usr=0.00%, sys=0.21%, ctx=80, majf=0, minf=8193 00:20:20.781 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:20.781 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525755: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=98, BW=98.9MiB/s (104MB/s)(1058MiB/10697msec) 00:20:20.781 slat (usec): min=40, max=2037.0k, avg=10000.88, stdev=106589.56 00:20:20.781 clat (msec): min=110, max=6964, avg=1220.11, stdev=1991.95 00:20:20.781 lat (msec): min=125, max=6966, avg=1230.11, stdev=1998.36 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 128], 5.00th=[ 169], 10.00th=[ 226], 20.00th=[ 305], 00:20:20.781 | 30.00th=[ 326], 40.00th=[ 498], 50.00th=[ 531], 60.00th=[ 550], 00:20:20.781 | 70.00th=[ 625], 80.00th=[ 768], 90.00th=[ 6544], 95.00th=[ 6745], 00:20:20.781 | 99.00th=[ 6879], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:20:20.781 | 99.99th=[ 6946] 00:20:20.781 bw ( KiB/s): min= 4096, max=579584, per=5.69%, avg=173149.09, stdev=182130.11, samples=11 00:20:20.781 iops : min= 4, max= 566, avg=169.09, stdev=177.86, samples=11 00:20:20.781 lat (msec) : 250=12.29%, 500=27.88%, 750=38.19%, 1000=8.60%, >=2000=13.04% 00:20:20.781 cpu : usr=0.05%, sys=1.55%, ctx=1149, majf=0, minf=32769 00:20:20.781 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.781 issued rwts: total=1058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525756: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=44, BW=44.9MiB/s (47.1MB/s)(569MiB/12661msec) 00:20:20.781 slat (usec): min=59, max=2107.5k, avg=18490.28, stdev=133515.26 00:20:20.781 clat (msec): min=421, max=5584, avg=1745.57, stdev=1235.86 00:20:20.781 lat (msec): min=423, max=7330, avg=1764.06, stdev=1260.76 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 435], 5.00th=[ 456], 10.00th=[ 485], 20.00th=[ 592], 00:20:20.781 | 30.00th=[ 802], 40.00th=[ 1062], 50.00th=[ 1234], 60.00th=[ 1804], 00:20:20.781 | 70.00th=[ 2089], 80.00th=[ 3339], 90.00th=[ 3742], 95.00th=[ 3842], 00:20:20.781 | 99.00th=[ 5134], 99.50th=[ 5470], 99.90th=[ 5604], 99.95th=[ 5604], 00:20:20.781 | 99.99th=[ 5604] 00:20:20.781 bw ( KiB/s): min= 1532, max=210944, per=3.72%, avg=113087.50, stdev=78601.18, samples=8 00:20:20.781 iops : min= 1, max= 206, avg=110.38, stdev=76.86, samples=8 00:20:20.781 lat (msec) : 500=12.65%, 750=14.41%, 1000=9.67%, 2000=31.46%, >=2000=31.81% 00:20:20.781 cpu : usr=0.02%, sys=1.03%, ctx=721, majf=0, minf=32769 00:20:20.781 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=88.9% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:20.781 issued rwts: total=569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.781 
latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525757: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=18, BW=18.7MiB/s (19.6MB/s)(236MiB/12631msec) 00:20:20.781 slat (usec): min=91, max=2113.3k, avg=44528.86, stdev=264318.78 00:20:20.781 clat (msec): min=833, max=11755, avg=6501.18, stdev=3428.48 00:20:20.781 lat (msec): min=841, max=11757, avg=6545.71, stdev=3432.28 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 835], 5.00th=[ 869], 10.00th=[ 944], 20.00th=[ 3943], 00:20:20.781 | 30.00th=[ 4111], 40.00th=[ 4178], 50.00th=[ 7819], 60.00th=[ 7886], 00:20:20.781 | 70.00th=[ 7953], 80.00th=[10671], 90.00th=[11610], 95.00th=[11745], 00:20:20.781 | 99.00th=[11745], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745], 00:20:20.781 | 99.99th=[11745] 00:20:20.781 bw ( KiB/s): min= 2048, max=92160, per=0.81%, avg=24786.22, stdev=29194.28, samples=9 00:20:20.781 iops : min= 2, max= 90, avg=24.00, stdev=28.39, samples=9 00:20:20.781 lat (msec) : 1000=10.59%, 2000=2.54%, >=2000=86.86% 00:20:20.781 cpu : usr=0.00%, sys=0.68%, ctx=391, majf=0, minf=32769 00:20:20.781 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.6%, >=64=73.3% 00:20:20.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.781 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:20:20.781 issued rwts: total=236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.781 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.781 job2: (groupid=0, jobs=1): err= 0: pid=2525758: Mon Jul 15 13:50:45 2024 00:20:20.781 read: IOPS=6, BW=6434KiB/s (6588kB/s)(67.0MiB/10664msec) 00:20:20.781 slat (usec): min=409, max=2094.1k, avg=158064.80, stdev=522491.92 00:20:20.781 clat (msec): min=73, max=10661, avg=7596.79, stdev=2352.79 00:20:20.781 lat (msec): min=2088, max=10663, avg=7754.85, stdev=2189.78 00:20:20.781 clat percentiles (msec): 00:20:20.781 | 1.00th=[ 73], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6477], 00:20:20.781 | 30.00th=[ 8356], 40.00th=[ 8356], 50.00th=[ 8490], 60.00th=[ 8490], 00:20:20.782 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10537], 95.00th=[10671], 00:20:20.782 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:20:20.782 | 99.99th=[10671] 00:20:20.782 lat (msec) : 100=1.49%, >=2000=98.51% 00:20:20.782 cpu : usr=0.00%, sys=0.44%, ctx=90, majf=0, minf=17153 00:20:20.782 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:20.782 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job2: (groupid=0, jobs=1): err= 0: pid=2525759: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=3, BW=3554KiB/s (3639kB/s)(44.0MiB/12679msec) 00:20:20.782 slat (usec): min=753, max=2099.1k, avg=240290.07, stdev=650820.69 00:20:20.782 clat (msec): min=2105, max=12677, avg=10199.49, stdev=3323.18 00:20:20.782 lat (msec): min=4170, max=12678, avg=10439.78, stdev=3098.98 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:20:20.782 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12550], 60.00th=[12550], 00:20:20.782 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.782 | 99.00th=[12684], 
99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.782 | 99.99th=[12684] 00:20:20.782 lat (msec) : >=2000=100.00% 00:20:20.782 cpu : usr=0.00%, sys=0.24%, ctx=75, majf=0, minf=11265 00:20:20.782 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.782 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job2: (groupid=0, jobs=1): err= 0: pid=2525760: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=12, BW=12.5MiB/s (13.1MB/s)(160MiB/12779msec) 00:20:20.782 slat (usec): min=686, max=2097.6k, avg=66606.96, stdev=323059.28 00:20:20.782 clat (msec): min=2120, max=12704, avg=7992.02, stdev=2150.29 00:20:20.782 lat (msec): min=4218, max=12717, avg=8058.63, stdev=2130.92 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 4212], 5.00th=[ 5201], 10.00th=[ 5336], 20.00th=[ 6342], 00:20:20.782 | 30.00th=[ 7416], 40.00th=[ 7617], 50.00th=[ 7819], 60.00th=[ 8020], 00:20:20.782 | 70.00th=[ 8221], 80.00th=[ 8490], 90.00th=[12550], 95.00th=[12684], 00:20:20.782 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.782 | 99.99th=[12684] 00:20:20.782 bw ( KiB/s): min= 1921, max=53248, per=0.44%, avg=13491.40, stdev=22244.56, samples=5 00:20:20.782 iops : min= 1, max= 52, avg=13.00, stdev=21.84, samples=5 00:20:20.782 lat (msec) : >=2000=100.00% 00:20:20.782 cpu : usr=0.00%, sys=1.01%, ctx=272, majf=0, minf=32769 00:20:20.782 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=5.0%, 16=10.0%, 32=20.0%, >=64=60.6% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=97.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.9% 00:20:20.782 issued rwts: total=160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525761: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=66, BW=66.1MiB/s (69.3MB/s)(839MiB/12690msec) 00:20:20.782 slat (usec): min=44, max=2155.0k, avg=12605.70, stdev=123653.81 00:20:20.782 clat (msec): min=273, max=9047, avg=1831.58, stdev=2848.89 00:20:20.782 lat (msec): min=274, max=9050, avg=1844.19, stdev=2858.06 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 279], 5.00th=[ 296], 10.00th=[ 309], 20.00th=[ 451], 00:20:20.782 | 30.00th=[ 527], 40.00th=[ 542], 50.00th=[ 584], 60.00th=[ 693], 00:20:20.782 | 70.00th=[ 877], 80.00th=[ 1028], 90.00th=[ 8658], 95.00th=[ 8926], 00:20:20.782 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:20:20.782 | 99.99th=[ 9060] 00:20:20.782 bw ( KiB/s): min= 1532, max=416958, per=3.99%, avg=121402.17, stdev=130194.05, samples=12 00:20:20.782 iops : min= 1, max= 407, avg=118.50, stdev=127.15, samples=12 00:20:20.782 lat (msec) : 500=22.65%, 750=43.03%, 1000=13.11%, 2000=3.93%, >=2000=17.28% 00:20:20.782 cpu : usr=0.00%, sys=1.10%, ctx=1057, majf=0, minf=32769 00:20:20.782 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.782 issued rwts: total=839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525762: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=13, BW=14.0MiB/s (14.6MB/s)(149MiB/10677msec) 00:20:20.782 slat (usec): min=59, max=2048.7k, avg=70877.86, stdev=335401.42 00:20:20.782 clat (msec): min=115, max=10523, avg=5096.34, stdev=1969.90 00:20:20.782 lat (msec): min=2163, max=10527, avg=5167.22, stdev=1978.11 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 2165], 5.00th=[ 3641], 10.00th=[ 3742], 20.00th=[ 3876], 00:20:20.782 | 30.00th=[ 4010], 40.00th=[ 4144], 50.00th=[ 4178], 60.00th=[ 4329], 00:20:20.782 | 70.00th=[ 4396], 80.00th=[ 7013], 90.00th=[ 8557], 95.00th=[ 8658], 00:20:20.782 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:20:20.782 | 99.99th=[10537] 00:20:20.782 bw ( KiB/s): min= 2048, max=18432, per=0.35%, avg=10752.00, stdev=8907.43, samples=4 00:20:20.782 iops : min= 2, max= 18, avg=10.50, stdev= 8.70, samples=4 00:20:20.782 lat (msec) : 250=0.67%, >=2000=99.33% 00:20:20.782 cpu : usr=0.00%, sys=0.87%, ctx=131, majf=0, minf=32769 00:20:20.782 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=5.4%, 16=10.7%, 32=21.5%, >=64=57.7% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=95.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.3% 00:20:20.782 issued rwts: total=149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525763: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=50, BW=50.8MiB/s (53.2MB/s)(643MiB/12665msec) 00:20:20.782 slat (usec): min=47, max=2137.5k, avg=16402.97, stdev=143660.94 00:20:20.782 clat (msec): min=393, max=8946, avg=2369.05, stdev=3160.37 00:20:20.782 lat (msec): min=394, max=8948, avg=2385.45, stdev=3168.44 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 397], 5.00th=[ 401], 10.00th=[ 401], 20.00th=[ 405], 00:20:20.782 | 30.00th=[ 426], 40.00th=[ 600], 50.00th=[ 1083], 60.00th=[ 1267], 00:20:20.782 | 70.00th=[ 1385], 80.00th=[ 4178], 90.00th=[ 8792], 95.00th=[ 8792], 00:20:20.782 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:20:20.782 | 99.99th=[ 8926] 00:20:20.782 bw ( KiB/s): min= 1532, max=317440, per=3.47%, avg=105607.20, stdev=116620.75, samples=10 00:20:20.782 iops : min= 1, max= 310, avg=103.00, stdev=113.95, samples=10 00:20:20.782 lat (msec) : 500=35.93%, 750=9.18%, 1000=4.20%, 2000=30.33%, >=2000=20.37% 00:20:20.782 cpu : usr=0.03%, sys=1.02%, ctx=996, majf=0, minf=32769 00:20:20.782 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:20.782 issued rwts: total=643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525764: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=4, BW=4343KiB/s (4447kB/s)(54.0MiB/12732msec) 00:20:20.782 slat (usec): min=681, max=2109.8k, avg=196629.08, stdev=597245.76 00:20:20.782 clat (msec): min=2113, max=12728, avg=11429.13, stdev=2538.61 00:20:20.782 lat (msec): min=4190, max=12731, avg=11625.76, stdev=2190.81 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 8490], 20.00th=[10671], 00:20:20.782 | 30.00th=[12550], 40.00th=[12684], 
50.00th=[12684], 60.00th=[12684], 00:20:20.782 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.782 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.782 | 99.99th=[12684] 00:20:20.782 lat (msec) : >=2000=100.00% 00:20:20.782 cpu : usr=0.00%, sys=0.42%, ctx=73, majf=0, minf=13825 00:20:20.782 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.782 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525765: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=11, BW=11.6MiB/s (12.2MB/s)(124MiB/10673msec) 00:20:20.782 slat (usec): min=420, max=2046.4k, avg=85152.27, stdev=366586.49 00:20:20.782 clat (msec): min=113, max=10622, avg=9180.46, stdev=2080.40 00:20:20.782 lat (msec): min=2152, max=10672, avg=9265.62, stdev=1915.84 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 2165], 5.00th=[ 4329], 10.00th=[ 6477], 20.00th=[ 9597], 00:20:20.782 | 30.00th=[ 9731], 40.00th=[ 9731], 50.00th=[ 9866], 60.00th=[ 9866], 00:20:20.782 | 70.00th=[10000], 80.00th=[10268], 90.00th=[10402], 95.00th=[10537], 00:20:20.782 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:20:20.782 | 99.99th=[10671] 00:20:20.782 lat (msec) : 250=0.81%, >=2000=99.19% 00:20:20.782 cpu : usr=0.00%, sys=0.74%, ctx=330, majf=0, minf=31745 00:20:20.782 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.5%, 16=12.9%, 32=25.8%, >=64=49.2% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:20.782 issued rwts: total=124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525766: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=165, BW=166MiB/s (174MB/s)(1760MiB/10609msec) 00:20:20.782 slat (usec): min=40, max=2011.9k, avg=5967.93, stdev=48643.75 00:20:20.782 clat (msec): min=98, max=2527, avg=706.99, stdev=501.54 00:20:20.782 lat (msec): min=263, max=2529, avg=712.96, stdev=502.85 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 264], 5.00th=[ 271], 10.00th=[ 284], 20.00th=[ 422], 00:20:20.782 | 30.00th=[ 489], 40.00th=[ 542], 50.00th=[ 575], 60.00th=[ 617], 00:20:20.782 | 70.00th=[ 676], 80.00th=[ 818], 90.00th=[ 1036], 95.00th=[ 2299], 00:20:20.782 | 99.00th=[ 2467], 99.50th=[ 2500], 99.90th=[ 2500], 99.95th=[ 2534], 00:20:20.782 | 99.99th=[ 2534] 00:20:20.782 bw ( KiB/s): min= 2048, max=485376, per=6.87%, avg=209024.00, stdev=109198.85, samples=16 00:20:20.782 iops : min= 2, max= 474, avg=204.12, stdev=106.64, samples=16 00:20:20.782 lat (msec) : 100=0.06%, 500=30.97%, 750=42.33%, 1000=16.08%, 2000=3.35% 00:20:20.782 lat (msec) : >=2000=7.22% 00:20:20.782 cpu : usr=0.08%, sys=1.73%, ctx=2129, majf=0, minf=32769 00:20:20.782 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.782 issued rwts: total=1760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525767: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=7, BW=8076KiB/s (8270kB/s)(84.0MiB/10651msec) 00:20:20.782 slat (usec): min=861, max=2076.7k, avg=125596.16, stdev=475426.71 00:20:20.782 clat (msec): min=100, max=10649, avg=6402.67, stdev=3007.21 00:20:20.782 lat (msec): min=2155, max=10650, avg=6528.26, stdev=2960.78 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 101], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4279], 00:20:20.782 | 30.00th=[ 4329], 40.00th=[ 6342], 50.00th=[ 6477], 60.00th=[ 6477], 00:20:20.782 | 70.00th=[ 8557], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:20:20.782 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:20:20.782 | 99.99th=[10671] 00:20:20.782 lat (msec) : 250=1.19%, >=2000=98.81% 00:20:20.782 cpu : usr=0.00%, sys=0.75%, ctx=71, majf=0, minf=21505 00:20:20.782 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.5%, 16=19.0%, 32=38.1%, >=64=25.0% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:20.782 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525768: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=2, BW=2733KiB/s (2798kB/s)(34.0MiB/12741msec) 00:20:20.782 slat (usec): min=900, max=2121.7k, avg=312559.04, stdev=732807.01 00:20:20.782 clat (msec): min=2113, max=12738, avg=10948.72, stdev=3007.10 00:20:20.782 lat (msec): min=4190, max=12740, avg=11261.28, stdev=2583.33 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[ 8490], 00:20:20.782 | 30.00th=[10671], 40.00th=[12550], 50.00th=[12684], 60.00th=[12684], 00:20:20.782 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.782 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.782 | 99.99th=[12684] 00:20:20.782 lat (msec) : >=2000=100.00% 00:20:20.782 cpu : usr=0.00%, sys=0.29%, ctx=76, majf=0, minf=8705 00:20:20.782 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.782 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525769: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=18, BW=19.0MiB/s (19.9MB/s)(204MiB/10752msec) 00:20:20.782 slat (usec): min=82, max=2050.0k, avg=52134.88, stdev=274119.12 00:20:20.782 clat (msec): min=115, max=9961, avg=6214.58, stdev=2786.29 00:20:20.782 lat (msec): min=2142, max=9962, avg=6266.71, stdev=2761.99 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 2165], 5.00th=[ 2299], 10.00th=[ 2400], 20.00th=[ 2467], 00:20:20.782 | 30.00th=[ 3842], 40.00th=[ 6275], 50.00th=[ 6544], 60.00th=[ 8221], 00:20:20.782 | 70.00th=[ 8423], 80.00th=[ 8658], 90.00th=[ 9731], 95.00th=[ 9866], 00:20:20.782 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:20:20.782 | 99.99th=[10000] 00:20:20.782 bw ( KiB/s): min= 8192, max=55296, per=1.02%, avg=31125.40, stdev=22081.61, samples=5 00:20:20.782 iops : min= 8, max= 54, avg=30.20, 
stdev=21.80, samples=5 00:20:20.782 lat (msec) : 250=0.49%, >=2000=99.51% 00:20:20.782 cpu : usr=0.00%, sys=1.14%, ctx=312, majf=0, minf=32769 00:20:20.782 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.7%, >=64=69.1% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:20:20.782 issued rwts: total=204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525770: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=23, BW=23.7MiB/s (24.9MB/s)(255MiB/10747msec) 00:20:20.782 slat (usec): min=49, max=2106.7k, avg=41806.69, stdev=255230.32 00:20:20.782 clat (msec): min=85, max=9776, avg=5180.05, stdev=3871.63 00:20:20.782 lat (msec): min=965, max=9780, avg=5221.86, stdev=3865.74 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 969], 5.00th=[ 1028], 10.00th=[ 1167], 20.00th=[ 1301], 00:20:20.782 | 30.00th=[ 1351], 40.00th=[ 1385], 50.00th=[ 4329], 60.00th=[ 8658], 00:20:20.782 | 70.00th=[ 9060], 80.00th=[ 9194], 90.00th=[ 9597], 95.00th=[ 9731], 00:20:20.782 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:20:20.782 | 99.99th=[ 9731] 00:20:20.782 bw ( KiB/s): min= 2048, max=98107, per=1.22%, avg=37122.43, stdev=42068.50, samples=7 00:20:20.782 iops : min= 2, max= 95, avg=35.86, stdev=41.10, samples=7 00:20:20.782 lat (msec) : 100=0.39%, 1000=2.35%, 2000=43.92%, >=2000=53.33% 00:20:20.782 cpu : usr=0.00%, sys=1.20%, ctx=447, majf=0, minf=32769 00:20:20.782 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.5%, >=64=75.3% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:20:20.782 issued rwts: total=255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, jobs=1): err= 0: pid=2525771: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=24, BW=24.2MiB/s (25.4MB/s)(307MiB/12671msec) 00:20:20.782 slat (usec): min=67, max=2090.9k, avg=34394.64, stdev=227809.47 00:20:20.782 clat (msec): min=559, max=10679, avg=3562.28, stdev=3109.77 00:20:20.782 lat (msec): min=559, max=12427, avg=3596.67, stdev=3140.56 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 584], 5.00th=[ 609], 10.00th=[ 634], 20.00th=[ 718], 00:20:20.782 | 30.00th=[ 785], 40.00th=[ 827], 50.00th=[ 869], 60.00th=[ 5201], 00:20:20.782 | 70.00th=[ 6879], 80.00th=[ 7013], 90.00th=[ 7148], 95.00th=[ 7215], 00:20:20.782 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:20:20.782 | 99.99th=[10671] 00:20:20.782 bw ( KiB/s): min= 1532, max=182272, per=2.02%, avg=61354.00, stdev=85274.87, samples=6 00:20:20.782 iops : min= 1, max= 178, avg=59.83, stdev=83.35, samples=6 00:20:20.782 lat (msec) : 750=25.41%, 1000=27.36%, >=2000=47.23% 00:20:20.782 cpu : usr=0.00%, sys=0.74%, ctx=325, majf=0, minf=32769 00:20:20.782 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.5% 00:20:20.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.782 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:20:20.782 issued rwts: total=307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.782 job3: (groupid=0, 
jobs=1): err= 0: pid=2525772: Mon Jul 15 13:50:45 2024 00:20:20.782 read: IOPS=3, BW=3378KiB/s (3459kB/s)(42.0MiB/12732msec) 00:20:20.782 slat (usec): min=837, max=2155.3k, avg=252867.74, stdev=664836.11 00:20:20.782 clat (msec): min=2110, max=12729, avg=10984.81, stdev=3093.97 00:20:20.782 lat (msec): min=4173, max=12731, avg=11237.68, stdev=2767.80 00:20:20.782 clat percentiles (msec): 00:20:20.782 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8557], 00:20:20.782 | 30.00th=[12550], 40.00th=[12550], 50.00th=[12550], 60.00th=[12684], 00:20:20.782 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:20:20.782 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:20:20.783 | 99.99th=[12684] 00:20:20.783 lat (msec) : >=2000=100.00% 00:20:20.783 cpu : usr=0.00%, sys=0.35%, ctx=69, majf=0, minf=10753 00:20:20.783 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.783 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job3: (groupid=0, jobs=1): err= 0: pid=2525773: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=57, BW=57.4MiB/s (60.2MB/s)(608MiB/10597msec) 00:20:20.783 slat (usec): min=101, max=2143.8k, avg=17284.86, stdev=146632.29 00:20:20.783 clat (msec): min=84, max=7111, avg=2046.14, stdev=2438.35 00:20:20.783 lat (msec): min=492, max=7115, avg=2063.43, stdev=2443.02 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 493], 5.00th=[ 502], 10.00th=[ 514], 20.00th=[ 542], 00:20:20.783 | 30.00th=[ 567], 40.00th=[ 659], 50.00th=[ 978], 60.00th=[ 1083], 00:20:20.783 | 70.00th=[ 1133], 80.00th=[ 6409], 90.00th=[ 6879], 95.00th=[ 7013], 00:20:20.783 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:20:20.783 | 99.99th=[ 7080] 00:20:20.783 bw ( KiB/s): min= 4087, max=243712, per=3.59%, avg=109217.89, stdev=95289.61, samples=9 00:20:20.783 iops : min= 3, max= 238, avg=106.44, stdev=93.28, samples=9 00:20:20.783 lat (msec) : 100=0.16%, 500=4.11%, 750=39.80%, 1000=6.25%, 2000=27.96% 00:20:20.783 lat (msec) : >=2000=21.71% 00:20:20.783 cpu : usr=0.07%, sys=1.29%, ctx=1180, majf=0, minf=32769 00:20:20.783 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:20.783 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525774: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=66, BW=66.6MiB/s (69.8MB/s)(709MiB/10651msec) 00:20:20.783 slat (usec): min=419, max=2017.0k, avg=14846.50, stdev=143029.86 00:20:20.783 clat (msec): min=122, max=6442, avg=1185.36, stdev=1262.66 00:20:20.783 lat (msec): min=312, max=6452, avg=1200.20, stdev=1286.27 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 313], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 338], 00:20:20.783 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 397], 60.00th=[ 485], 00:20:20.783 | 70.00th=[ 2299], 80.00th=[ 2500], 90.00th=[ 2769], 95.00th=[ 4212], 00:20:20.783 | 99.00th=[ 4396], 99.50th=[ 6342], 99.90th=[ 6477], 99.95th=[ 6477], 00:20:20.783 | 
99.99th=[ 6477] 00:20:20.783 bw ( KiB/s): min= 2048, max=385024, per=6.52%, avg=198314.67, stdev=166578.92, samples=6 00:20:20.783 iops : min= 2, max= 376, avg=193.67, stdev=162.67, samples=6 00:20:20.783 lat (msec) : 250=0.14%, 500=65.02%, 750=2.26%, 1000=0.42%, >=2000=32.16% 00:20:20.783 cpu : usr=0.06%, sys=0.96%, ctx=1745, majf=0, minf=32769 00:20:20.783 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:20.783 issued rwts: total=709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525775: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=113, BW=113MiB/s (119MB/s)(1134MiB/10016msec) 00:20:20.783 slat (usec): min=66, max=2024.9k, avg=8815.02, stdev=96061.34 00:20:20.783 clat (msec): min=14, max=4900, avg=928.33, stdev=1412.83 00:20:20.783 lat (msec): min=15, max=4906, avg=937.14, stdev=1418.94 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 27], 5.00th=[ 75], 10.00th=[ 243], 20.00th=[ 249], 00:20:20.783 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 288], 00:20:20.783 | 70.00th=[ 575], 80.00th=[ 760], 90.00th=[ 4463], 95.00th=[ 4732], 00:20:20.783 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4933], 00:20:20.783 | 99.99th=[ 4933] 00:20:20.783 bw ( KiB/s): min=10240, max=514048, per=6.36%, avg=193536.00, stdev=187210.05, samples=8 00:20:20.783 iops : min= 10, max= 502, avg=189.00, stdev=182.82, samples=8 00:20:20.783 lat (msec) : 20=0.53%, 50=2.73%, 100=2.73%, 250=18.25%, 500=38.71% 00:20:20.783 lat (msec) : 750=17.02%, 1000=2.91%, 2000=1.23%, >=2000=15.87% 00:20:20.783 cpu : usr=0.04%, sys=1.93%, ctx=2137, majf=0, minf=32769 00:20:20.783 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.783 issued rwts: total=1134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525776: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=41, BW=41.5MiB/s (43.5MB/s)(446MiB/10739msec) 00:20:20.783 slat (usec): min=69, max=1995.9k, avg=23811.71, stdev=172342.33 00:20:20.783 clat (msec): min=116, max=8736, avg=2700.30, stdev=2047.14 00:20:20.783 lat (msec): min=493, max=8742, avg=2724.11, stdev=2062.16 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 493], 5.00th=[ 531], 10.00th=[ 584], 20.00th=[ 760], 00:20:20.783 | 30.00th=[ 1435], 40.00th=[ 1703], 50.00th=[ 1871], 60.00th=[ 2467], 00:20:20.783 | 70.00th=[ 2836], 80.00th=[ 4732], 90.00th=[ 6007], 95.00th=[ 6007], 00:20:20.783 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8792], 99.95th=[ 8792], 00:20:20.783 | 99.99th=[ 8792] 00:20:20.783 bw ( KiB/s): min= 8192, max=194560, per=2.68%, avg=81399.75, stdev=75302.24, samples=8 00:20:20.783 iops : min= 8, max= 190, avg=79.38, stdev=73.65, samples=8 00:20:20.783 lat (msec) : 250=0.22%, 500=2.47%, 750=13.68%, 1000=4.71%, 2000=36.32% 00:20:20.783 lat (msec) : >=2000=42.60% 00:20:20.783 cpu : usr=0.00%, sys=1.29%, ctx=843, majf=0, minf=32769 00:20:20.783 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9% 00:20:20.783 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:20.783 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525777: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=110, BW=111MiB/s (116MB/s)(1110MiB/10017msec) 00:20:20.783 slat (usec): min=50, max=1967.7k, avg=9006.58, stdev=85785.86 00:20:20.783 clat (msec): min=15, max=4538, avg=1000.44, stdev=1248.36 00:20:20.783 lat (msec): min=17, max=4540, avg=1009.44, stdev=1254.20 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 38], 5.00th=[ 180], 10.00th=[ 257], 20.00th=[ 259], 00:20:20.783 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 296], 60.00th=[ 489], 00:20:20.783 | 70.00th=[ 835], 80.00th=[ 1536], 90.00th=[ 3004], 95.00th=[ 4396], 00:20:20.783 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:20:20.783 | 99.99th=[ 4530] 00:20:20.783 bw ( KiB/s): min=10240, max=499712, per=6.01%, avg=182956.82, stdev=170556.15, samples=11 00:20:20.783 iops : min= 10, max= 488, avg=178.64, stdev=166.53, samples=11 00:20:20.783 lat (msec) : 20=0.27%, 50=1.26%, 100=1.35%, 250=4.86%, 500=52.34% 00:20:20.783 lat (msec) : 750=5.23%, 1000=9.55%, 2000=6.67%, >=2000=18.47% 00:20:20.783 cpu : usr=0.03%, sys=2.18%, ctx=1916, majf=0, minf=32447 00:20:20.783 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.783 issued rwts: total=1110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525778: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=74, BW=74.8MiB/s (78.4MB/s)(803MiB/10736msec) 00:20:20.783 slat (usec): min=73, max=2029.0k, avg=13219.52, stdev=108251.48 00:20:20.783 clat (msec): min=116, max=5544, avg=1632.78, stdev=1609.50 00:20:20.783 lat (msec): min=403, max=5550, avg=1646.00, stdev=1618.48 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 401], 5.00th=[ 414], 10.00th=[ 426], 20.00th=[ 439], 00:20:20.783 | 30.00th=[ 693], 40.00th=[ 776], 50.00th=[ 1028], 60.00th=[ 1334], 00:20:20.783 | 70.00th=[ 1586], 80.00th=[ 1737], 90.00th=[ 5134], 95.00th=[ 5403], 00:20:20.783 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537], 00:20:20.783 | 99.99th=[ 5537] 00:20:20.783 bw ( KiB/s): min= 4096, max=315392, per=3.50%, avg=106338.46, stdev=98067.13, samples=13 00:20:20.783 iops : min= 4, max= 308, avg=103.85, stdev=95.77, samples=13 00:20:20.783 lat (msec) : 250=0.12%, 500=23.16%, 750=11.33%, 1000=13.82%, 2000=34.62% 00:20:20.783 lat (msec) : >=2000=16.94% 00:20:20.783 cpu : usr=0.02%, sys=1.78%, ctx=2174, majf=0, minf=32769 00:20:20.783 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.783 issued rwts: total=803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525779: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=2, BW=2309KiB/s 
(2364kB/s)(24.0MiB/10645msec) 00:20:20.783 slat (msec): min=4, max=2130, avg=438.80, stdev=834.87 00:20:20.783 clat (msec): min=112, max=10604, avg=7042.49, stdev=3525.32 00:20:20.783 lat (msec): min=2154, max=10644, avg=7481.29, stdev=3271.56 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 113], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2198], 00:20:20.783 | 30.00th=[ 6477], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[ 8658], 00:20:20.783 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:20:20.783 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:20:20.783 | 99.99th=[10671] 00:20:20.783 lat (msec) : 250=4.17%, >=2000=95.83% 00:20:20.783 cpu : usr=0.01%, sys=0.14%, ctx=67, majf=0, minf=6145 00:20:20.783 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:20.783 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525780: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=157, BW=158MiB/s (165MB/s)(1684MiB/10683msec) 00:20:20.783 slat (usec): min=46, max=2137.6k, avg=6271.23, stdev=72806.93 00:20:20.783 clat (msec): min=112, max=4675, avg=771.24, stdev=1080.38 00:20:20.783 lat (msec): min=221, max=4677, avg=777.51, stdev=1083.64 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 243], 5.00th=[ 266], 10.00th=[ 268], 20.00th=[ 279], 00:20:20.783 | 30.00th=[ 401], 40.00th=[ 409], 50.00th=[ 418], 60.00th=[ 506], 00:20:20.783 | 70.00th=[ 634], 80.00th=[ 684], 90.00th=[ 802], 95.00th=[ 4530], 00:20:20.783 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:20:20.783 | 99.99th=[ 4665] 00:20:20.783 bw ( KiB/s): min= 6144, max=489472, per=8.06%, avg=245093.92, stdev=129431.62, samples=13 00:20:20.783 iops : min= 6, max= 478, avg=239.31, stdev=126.40, samples=13 00:20:20.783 lat (msec) : 250=1.31%, 500=58.49%, 750=27.85%, 1000=4.63%, >=2000=7.72% 00:20:20.783 cpu : usr=0.06%, sys=2.29%, ctx=1569, majf=0, minf=32769 00:20:20.783 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.3% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.783 issued rwts: total=1684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525781: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=78, BW=78.2MiB/s (82.0MB/s)(829MiB/10603msec) 00:20:20.783 slat (usec): min=105, max=1977.8k, avg=12660.34, stdev=96467.20 00:20:20.783 clat (msec): min=103, max=3741, avg=1530.08, stdev=1042.39 00:20:20.783 lat (msec): min=489, max=3755, avg=1542.74, stdev=1043.40 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 489], 5.00th=[ 498], 10.00th=[ 510], 20.00th=[ 667], 00:20:20.783 | 30.00th=[ 852], 40.00th=[ 1020], 50.00th=[ 1116], 60.00th=[ 1167], 00:20:20.783 | 70.00th=[ 2123], 80.00th=[ 2668], 90.00th=[ 3507], 95.00th=[ 3641], 00:20:20.783 | 99.00th=[ 3675], 99.50th=[ 3708], 99.90th=[ 3742], 99.95th=[ 3742], 00:20:20.783 | 99.99th=[ 3742] 00:20:20.783 bw ( KiB/s): min= 4096, max=256000, per=3.63%, avg=110434.46, stdev=74653.56, samples=13 00:20:20.783 iops : 
min= 4, max= 250, avg=107.85, stdev=72.90, samples=13 00:20:20.783 lat (msec) : 250=0.12%, 500=6.27%, 750=19.18%, 1000=12.91%, 2000=30.88% 00:20:20.783 lat (msec) : >=2000=30.64% 00:20:20.783 cpu : usr=0.06%, sys=1.25%, ctx=1539, majf=0, minf=32769 00:20:20.783 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.783 issued rwts: total=829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525782: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=165, BW=166MiB/s (174MB/s)(1755MiB/10590msec) 00:20:20.783 slat (usec): min=45, max=1998.1k, avg=5973.81, stdev=65024.32 00:20:20.783 clat (msec): min=100, max=2919, avg=591.72, stdev=572.83 00:20:20.783 lat (msec): min=126, max=2931, avg=597.69, stdev=577.46 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 126], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 129], 00:20:20.783 | 30.00th=[ 222], 40.00th=[ 376], 50.00th=[ 535], 60.00th=[ 584], 00:20:20.783 | 70.00th=[ 651], 80.00th=[ 793], 90.00th=[ 885], 95.00th=[ 2333], 00:20:20.783 | 99.00th=[ 2534], 99.50th=[ 2567], 99.90th=[ 2903], 99.95th=[ 2903], 00:20:20.783 | 99.99th=[ 2903] 00:20:20.783 bw ( KiB/s): min=20480, max=919552, per=9.13%, avg=277632.42, stdev=231937.25, samples=12 00:20:20.783 iops : min= 20, max= 898, avg=271.08, stdev=226.51, samples=12 00:20:20.783 lat (msec) : 250=33.45%, 500=12.25%, 750=33.56%, 1000=12.82%, 2000=0.46% 00:20:20.783 lat (msec) : >=2000=7.46% 00:20:20.783 cpu : usr=0.04%, sys=1.87%, ctx=2446, majf=0, minf=32769 00:20:20.783 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.783 issued rwts: total=1755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525783: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=103, BW=104MiB/s (109MB/s)(1047MiB/10102msec) 00:20:20.783 slat (usec): min=50, max=1956.8k, avg=9602.56, stdev=61403.69 00:20:20.783 clat (msec): min=41, max=3673, avg=1167.89, stdev=877.69 00:20:20.783 lat (msec): min=115, max=3679, avg=1177.50, stdev=880.45 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 133], 5.00th=[ 380], 10.00th=[ 535], 20.00th=[ 743], 00:20:20.783 | 30.00th=[ 793], 40.00th=[ 852], 50.00th=[ 919], 60.00th=[ 969], 00:20:20.783 | 70.00th=[ 1053], 80.00th=[ 1200], 90.00th=[ 3071], 95.00th=[ 3608], 00:20:20.783 | 99.00th=[ 3641], 99.50th=[ 3675], 99.90th=[ 3675], 99.95th=[ 3675], 00:20:20.783 | 99.99th=[ 3675] 00:20:20.783 bw ( KiB/s): min=10240, max=256000, per=3.87%, avg=117632.00, stdev=65782.87, samples=16 00:20:20.783 iops : min= 10, max= 250, avg=114.88, stdev=64.24, samples=16 00:20:20.783 lat (msec) : 50=0.10%, 250=2.96%, 500=4.39%, 750=13.94%, 1000=42.79% 00:20:20.783 lat (msec) : 2000=23.69%, >=2000=12.13% 00:20:20.783 cpu : usr=0.06%, sys=1.86%, ctx=1528, majf=0, minf=32769 00:20:20.783 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.783 complete : 0=0.0%, 4=99.9%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.783 issued rwts: total=1047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.783 job4: (groupid=0, jobs=1): err= 0: pid=2525784: Mon Jul 15 13:50:45 2024 00:20:20.783 read: IOPS=197, BW=197MiB/s (207MB/s)(1980MiB/10034msec) 00:20:20.783 slat (usec): min=44, max=184057, avg=5047.90, stdev=9472.85 00:20:20.783 clat (msec): min=31, max=1247, avg=596.81, stdev=253.04 00:20:20.783 lat (msec): min=35, max=1249, avg=601.85, stdev=254.77 00:20:20.783 clat percentiles (msec): 00:20:20.783 | 1.00th=[ 73], 5.00th=[ 236], 10.00th=[ 330], 20.00th=[ 372], 00:20:20.783 | 30.00th=[ 426], 40.00th=[ 523], 50.00th=[ 575], 60.00th=[ 634], 00:20:20.783 | 70.00th=[ 718], 80.00th=[ 793], 90.00th=[ 911], 95.00th=[ 1133], 00:20:20.783 | 99.00th=[ 1217], 99.50th=[ 1234], 99.90th=[ 1250], 99.95th=[ 1250], 00:20:20.783 | 99.99th=[ 1250] 00:20:20.783 bw ( KiB/s): min=83968, max=385024, per=6.93%, avg=210830.22, stdev=94603.86, samples=18 00:20:20.783 iops : min= 82, max= 376, avg=205.89, stdev=92.39, samples=18 00:20:20.783 lat (msec) : 50=0.51%, 100=1.11%, 250=3.64%, 500=31.77%, 750=35.15% 00:20:20.783 lat (msec) : 1000=20.05%, 2000=7.78% 00:20:20.783 cpu : usr=0.09%, sys=2.27%, ctx=3928, majf=0, minf=32769 00:20:20.783 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:20:20.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.784 issued rwts: total=1980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job4: (groupid=0, jobs=1): err= 0: pid=2525785: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=18, BW=19.0MiB/s (19.9MB/s)(202MiB/10640msec) 00:20:20.784 slat (usec): min=138, max=2106.0k, avg=52110.39, stdev=286758.19 00:20:20.784 clat (msec): min=112, max=9776, avg=6233.52, stdev=3635.52 00:20:20.784 lat (msec): min=1238, max=9804, avg=6285.63, stdev=3613.54 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 1234], 5.00th=[ 1267], 10.00th=[ 1301], 20.00th=[ 1334], 00:20:20.784 | 30.00th=[ 1418], 40.00th=[ 5604], 50.00th=[ 8658], 60.00th=[ 8926], 00:20:20.784 | 70.00th=[ 9194], 80.00th=[ 9329], 90.00th=[ 9597], 95.00th=[ 9731], 00:20:20.784 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:20:20.784 | 99.99th=[ 9731] 00:20:20.784 bw ( KiB/s): min= 4096, max=77824, per=0.83%, avg=25258.67, stdev=27860.83, samples=6 00:20:20.784 iops : min= 4, max= 76, avg=24.67, stdev=27.21, samples=6 00:20:20.784 lat (msec) : 250=0.50%, 2000=30.20%, >=2000=69.31% 00:20:20.784 cpu : usr=0.01%, sys=1.07%, ctx=453, majf=0, minf=32769 00:20:20.784 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=7.9%, 32=15.8%, >=64=68.8% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:20:20.784 issued rwts: total=202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job4: (groupid=0, jobs=1): err= 0: pid=2525786: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=10, BW=10.2MiB/s (10.7MB/s)(110MiB/10743msec) 00:20:20.784 slat (usec): min=551, max=2080.4k, avg=96590.16, stdev=413215.61 00:20:20.784 clat (msec): min=116, max=10740, avg=5504.59, stdev=4040.85 00:20:20.784 lat (msec): min=2026, max=10741, 
avg=5601.18, stdev=4037.87 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 2022], 5.00th=[ 2039], 10.00th=[ 2039], 20.00th=[ 2072], 00:20:20.784 | 30.00th=[ 2089], 40.00th=[ 2123], 50.00th=[ 2198], 60.00th=[ 6477], 00:20:20.784 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:20:20.784 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:20:20.784 | 99.99th=[10805] 00:20:20.784 lat (msec) : 250=0.91%, >=2000=99.09% 00:20:20.784 cpu : usr=0.00%, sys=0.77%, ctx=211, majf=0, minf=28161 00:20:20.784 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.3%, 16=14.5%, 32=29.1%, >=64=42.7% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:20.784 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525787: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=39, BW=39.1MiB/s (41.0MB/s)(392MiB/10018msec) 00:20:20.784 slat (usec): min=64, max=2058.9k, avg=25517.36, stdev=201293.64 00:20:20.784 clat (msec): min=13, max=8936, avg=1292.31, stdev=2312.21 00:20:20.784 lat (msec): min=18, max=8942, avg=1317.83, stdev=2344.55 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 24], 5.00th=[ 58], 10.00th=[ 101], 20.00th=[ 203], 00:20:20.784 | 30.00th=[ 363], 40.00th=[ 464], 50.00th=[ 518], 60.00th=[ 600], 00:20:20.784 | 70.00th=[ 659], 80.00th=[ 693], 90.00th=[ 5000], 95.00th=[ 8926], 00:20:20.784 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:20:20.784 | 99.99th=[ 8926] 00:20:20.784 bw ( KiB/s): min=32768, max=301056, per=5.95%, avg=180906.67, stdev=136316.41, samples=3 00:20:20.784 iops : min= 32, max= 294, avg=176.67, stdev=133.12, samples=3 00:20:20.784 lat (msec) : 20=0.51%, 50=3.83%, 100=6.12%, 250=12.76%, 500=22.45% 00:20:20.784 lat (msec) : 750=39.80%, 1000=0.77%, >=2000=13.78% 00:20:20.784 cpu : usr=0.00%, sys=1.10%, ctx=769, majf=0, minf=32769 00:20:20.784 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.2%, >=64=83.9% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:20:20.784 issued rwts: total=392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525788: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=44, BW=44.3MiB/s (46.4MB/s)(447MiB/10091msec) 00:20:20.784 slat (usec): min=55, max=1965.0k, avg=22388.96, stdev=167490.01 00:20:20.784 clat (msec): min=81, max=6990, avg=1759.89, stdev=1896.97 00:20:20.784 lat (msec): min=165, max=6996, avg=1782.28, stdev=1914.90 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 167], 5.00th=[ 199], 10.00th=[ 313], 20.00th=[ 481], 00:20:20.784 | 30.00th=[ 617], 40.00th=[ 676], 50.00th=[ 751], 60.00th=[ 835], 00:20:20.784 | 70.00th=[ 2165], 80.00th=[ 2299], 90.00th=[ 5000], 95.00th=[ 6879], 00:20:20.784 | 99.00th=[ 6946], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:20:20.784 | 99.99th=[ 7013] 00:20:20.784 bw ( KiB/s): min= 2048, max=217088, per=3.59%, avg=109169.33, stdev=85060.10, samples=6 00:20:20.784 iops : min= 2, max= 212, avg=106.50, stdev=82.97, samples=6 00:20:20.784 lat (msec) : 100=0.22%, 250=7.16%, 500=13.87%, 750=28.64%, 1000=10.29% 00:20:20.784 lat (msec) : 
>=2000=39.82% 00:20:20.784 cpu : usr=0.01%, sys=1.36%, ctx=538, majf=0, minf=32769 00:20:20.784 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.9% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:20:20.784 issued rwts: total=447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525789: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=1, BW=1447KiB/s (1482kB/s)(15.0MiB/10615msec) 00:20:20.784 slat (usec): min=1647, max=2091.9k, avg=699443.45, stdev=995668.46 00:20:20.784 clat (msec): min=122, max=8668, avg=4930.13, stdev=2728.73 00:20:20.784 lat (msec): min=2211, max=10614, avg=5629.58, stdev=2753.01 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 124], 5.00th=[ 124], 10.00th=[ 2198], 20.00th=[ 2232], 00:20:20.784 | 30.00th=[ 2232], 40.00th=[ 4329], 50.00th=[ 4396], 60.00th=[ 6477], 00:20:20.784 | 70.00th=[ 6477], 80.00th=[ 6544], 90.00th=[ 8658], 95.00th=[ 8658], 00:20:20.784 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:20:20.784 | 99.99th=[ 8658] 00:20:20.784 lat (msec) : 250=6.67%, >=2000=93.33% 00:20:20.784 cpu : usr=0.00%, sys=0.11%, ctx=54, majf=0, minf=3841 00:20:20.784 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525790: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=25, BW=25.3MiB/s (26.5MB/s)(271MiB/10732msec) 00:20:20.784 slat (usec): min=72, max=2058.3k, avg=39159.82, stdev=236279.95 00:20:20.784 clat (msec): min=117, max=10587, avg=4281.11, stdev=2489.43 00:20:20.784 lat (msec): min=1420, max=10590, avg=4320.27, stdev=2503.52 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 1418], 5.00th=[ 1502], 10.00th=[ 1603], 20.00th=[ 1770], 00:20:20.784 | 30.00th=[ 2022], 40.00th=[ 2601], 50.00th=[ 2668], 60.00th=[ 6409], 00:20:20.784 | 70.00th=[ 6477], 80.00th=[ 6544], 90.00th=[ 6611], 95.00th=[ 8020], 00:20:20.784 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:20:20.784 | 99.99th=[10537] 00:20:20.784 bw ( KiB/s): min= 6144, max=96256, per=1.60%, avg=48786.67, stdev=43166.35, samples=6 00:20:20.784 iops : min= 6, max= 94, avg=47.50, stdev=42.07, samples=6 00:20:20.784 lat (msec) : 250=0.37%, 2000=29.52%, >=2000=70.11% 00:20:20.784 cpu : usr=0.01%, sys=1.28%, ctx=504, majf=0, minf=32769 00:20:20.784 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.8%, >=64=76.8% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:20:20.784 issued rwts: total=271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525791: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=3, BW=3668KiB/s (3756kB/s)(38.0MiB/10608msec) 00:20:20.784 slat (usec): min=663, max=2057.1k, avg=275867.09, stdev=671548.85 00:20:20.784 clat (msec): min=124, max=10598, 
avg=6997.92, stdev=2735.50 00:20:20.784 lat (msec): min=2181, max=10607, avg=7273.79, stdev=2545.60 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 125], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 4396], 00:20:20.784 | 30.00th=[ 6477], 40.00th=[ 6477], 50.00th=[ 8423], 60.00th=[ 8490], 00:20:20.784 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10537], 95.00th=[10537], 00:20:20.784 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:20:20.784 | 99.99th=[10537] 00:20:20.784 lat (msec) : 250=2.63%, >=2000=97.37% 00:20:20.784 cpu : usr=0.00%, sys=0.28%, ctx=99, majf=0, minf=9729 00:20:20.784 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.784 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525792: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=5, BW=5397KiB/s (5527kB/s)(56.0MiB/10625msec) 00:20:20.784 slat (usec): min=601, max=2083.3k, avg=187894.45, stdev=575272.21 00:20:20.784 clat (msec): min=101, max=10623, avg=7182.35, stdev=3004.16 00:20:20.784 lat (msec): min=2135, max=10623, avg=7370.25, stdev=2879.73 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 103], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:20:20.784 | 30.00th=[ 6342], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8557], 00:20:20.784 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:20:20.784 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:20:20.784 | 99.99th=[10671] 00:20:20.784 lat (msec) : 250=1.79%, >=2000=98.21% 00:20:20.784 cpu : usr=0.00%, sys=0.43%, ctx=74, majf=0, minf=14337 00:20:20.784 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.784 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525793: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=6, BW=6380KiB/s (6533kB/s)(67.0MiB/10753msec) 00:20:20.784 slat (usec): min=850, max=2080.1k, avg=158609.12, stdev=533580.31 00:20:20.784 clat (msec): min=125, max=10751, avg=8900.02, stdev=3173.10 00:20:20.784 lat (msec): min=2131, max=10752, avg=9058.63, stdev=2988.07 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 127], 5.00th=[ 2198], 10.00th=[ 2265], 20.00th=[ 6477], 00:20:20.784 | 30.00th=[10671], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:20:20.784 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:20:20.784 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:20:20.784 | 99.99th=[10805] 00:20:20.784 lat (msec) : 250=1.49%, >=2000=98.51% 00:20:20.784 cpu : usr=0.00%, sys=0.68%, ctx=103, majf=0, minf=17153 00:20:20.784 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:20:20.784 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525794: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=70, BW=70.6MiB/s (74.0MB/s)(708MiB/10035msec) 00:20:20.784 slat (usec): min=40, max=2063.4k, avg=14122.93, stdev=118157.67 00:20:20.784 clat (msec): min=32, max=6258, avg=1741.67, stdev=1909.98 00:20:20.784 lat (msec): min=56, max=6272, avg=1755.79, stdev=1919.24 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 64], 5.00th=[ 222], 10.00th=[ 393], 20.00th=[ 575], 00:20:20.784 | 30.00th=[ 609], 40.00th=[ 684], 50.00th=[ 869], 60.00th=[ 1351], 00:20:20.784 | 70.00th=[ 1401], 80.00th=[ 2836], 90.00th=[ 5940], 95.00th=[ 6074], 00:20:20.784 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6275], 99.95th=[ 6275], 00:20:20.784 | 99.99th=[ 6275] 00:20:20.784 bw ( KiB/s): min= 6144, max=223232, per=3.26%, avg=99157.33, stdev=72299.48, samples=12 00:20:20.784 iops : min= 6, max= 218, avg=96.83, stdev=70.60, samples=12 00:20:20.784 lat (msec) : 50=0.14%, 100=2.12%, 250=4.38%, 500=6.64%, 750=33.76% 00:20:20.784 lat (msec) : 1000=10.45%, 2000=20.34%, >=2000=22.18% 00:20:20.784 cpu : usr=0.03%, sys=1.29%, ctx=973, majf=0, minf=32769 00:20:20.784 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:20.784 issued rwts: total=708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525795: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=3, BW=3821KiB/s (3913kB/s)(40.0MiB/10719msec) 00:20:20.784 slat (usec): min=533, max=2070.9k, avg=265009.93, stdev=670007.87 00:20:20.784 clat (msec): min=117, max=10715, avg=7098.94, stdev=3293.87 00:20:20.784 lat (msec): min=2142, max=10718, avg=7363.95, stdev=3140.65 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 118], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 4279], 00:20:20.784 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8658], 00:20:20.784 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:20:20.784 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:20:20.784 | 99.99th=[10671] 00:20:20.784 lat (msec) : 250=2.50%, >=2000=97.50% 00:20:20.784 cpu : usr=0.00%, sys=0.35%, ctx=99, majf=0, minf=10241 00:20:20.784 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.784 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525796: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=217, BW=218MiB/s (228MB/s)(2316MiB/10635msec) 00:20:20.784 slat (usec): min=60, max=2049.1k, avg=4534.91, stdev=70814.58 00:20:20.784 clat (msec): min=118, max=4373, avg=395.63, stdev=633.14 00:20:20.784 lat (msec): min=119, max=4387, avg=400.16, stdev=639.86 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 129], 5.00th=[ 130], 10.00th=[ 130], 20.00th=[ 131], 00:20:20.784 | 30.00th=[ 131], 40.00th=[ 132], 50.00th=[ 245], 60.00th=[ 253], 00:20:20.784 | 70.00th=[ 266], 80.00th=[ 313], 90.00th=[ 
676], 95.00th=[ 2333], 00:20:20.784 | 99.00th=[ 2500], 99.50th=[ 4396], 99.90th=[ 4396], 99.95th=[ 4396], 00:20:20.784 | 99.99th=[ 4396] 00:20:20.784 bw ( KiB/s): min=159744, max=1001472, per=16.37%, avg=497891.56, stdev=311205.79, samples=9 00:20:20.784 iops : min= 156, max= 978, avg=486.22, stdev=303.91, samples=9 00:20:20.784 lat (msec) : 250=56.00%, 500=31.26%, 750=3.89%, 1000=1.99%, >=2000=6.87% 00:20:20.784 cpu : usr=0.06%, sys=3.00%, ctx=1976, majf=0, minf=32769 00:20:20.784 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:20.784 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525797: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=2, BW=2699KiB/s (2764kB/s)(28.0MiB/10622msec) 00:20:20.784 slat (usec): min=601, max=2103.4k, avg=375045.33, stdev=796689.46 00:20:20.784 clat (msec): min=119, max=10618, avg=6387.66, stdev=2666.39 00:20:20.784 lat (msec): min=2154, max=10621, avg=6762.71, stdev=2484.47 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 121], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:20:20.784 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 6477], 00:20:20.784 | 70.00th=[ 6544], 80.00th=[ 8658], 90.00th=[10671], 95.00th=[10671], 00:20:20.784 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:20:20.784 | 99.99th=[10671] 00:20:20.784 lat (msec) : 250=3.57%, >=2000=96.43% 00:20:20.784 cpu : usr=0.00%, sys=0.18%, ctx=68, majf=0, minf=7169 00:20:20.784 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:20:20.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.784 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:20:20.784 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.784 job5: (groupid=0, jobs=1): err= 0: pid=2525798: Mon Jul 15 13:50:45 2024 00:20:20.784 read: IOPS=5, BW=5621KiB/s (5756kB/s)(59.0MiB/10749msec) 00:20:20.784 slat (usec): min=1089, max=2064.7k, avg=180099.24, stdev=564989.51 00:20:20.784 clat (msec): min=122, max=10745, avg=8907.87, stdev=3041.34 00:20:20.784 lat (msec): min=2161, max=10748, avg=9087.97, stdev=2818.59 00:20:20.784 clat percentiles (msec): 00:20:20.784 | 1.00th=[ 123], 5.00th=[ 2198], 10.00th=[ 2265], 20.00th=[ 6477], 00:20:20.784 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:20:20.784 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10805], 00:20:20.784 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:20:20.784 | 99.99th=[10805] 00:20:20.784 lat (msec) : 250=1.69%, >=2000=98.31% 00:20:20.785 cpu : usr=0.00%, sys=0.60%, ctx=105, majf=0, minf=15105 00:20:20.785 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:20:20.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.785 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:20:20.785 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.785 job5: (groupid=0, jobs=1): err= 0: pid=2525799: Mon Jul 15 
13:50:45 2024 00:20:20.785 read: IOPS=58, BW=58.7MiB/s (61.6MB/s)(622MiB/10593msec) 00:20:20.785 slat (usec): min=35, max=1984.2k, avg=16939.87, stdev=145987.14 00:20:20.785 clat (msec): min=53, max=6351, avg=1530.01, stdev=1390.77 00:20:20.785 lat (msec): min=386, max=6362, avg=1546.95, stdev=1402.44 00:20:20.785 clat percentiles (msec): 00:20:20.785 | 1.00th=[ 397], 5.00th=[ 451], 10.00th=[ 464], 20.00th=[ 535], 00:20:20.785 | 30.00th=[ 550], 40.00th=[ 575], 50.00th=[ 617], 60.00th=[ 2106], 00:20:20.785 | 70.00th=[ 2232], 80.00th=[ 2433], 90.00th=[ 2601], 95.00th=[ 4463], 00:20:20.785 | 99.00th=[ 6275], 99.50th=[ 6342], 99.90th=[ 6342], 99.95th=[ 6342], 00:20:20.785 | 99.99th=[ 6342] 00:20:20.785 bw ( KiB/s): min= 2048, max=239616, per=4.75%, avg=144530.29, stdev=114559.06, samples=7 00:20:20.785 iops : min= 2, max= 234, avg=141.14, stdev=111.87, samples=7 00:20:20.785 lat (msec) : 100=0.16%, 500=12.86%, 750=45.02%, >=2000=41.96% 00:20:20.785 cpu : usr=0.00%, sys=1.26%, ctx=574, majf=0, minf=32769 00:20:20.785 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:20:20.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.785 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:20:20.785 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:20.785 00:20:20.785 Run status group 0 (all jobs): 00:20:20.785 READ: bw=2970MiB/s (3115MB/s), 1378KiB/s-218MiB/s (1411kB/s-228MB/s), io=37.1GiB (39.9GB), run=10016-12801msec 00:20:20.785 00:20:20.785 Disk stats (read/write): 00:20:20.785 nvme0n1: ios=36827/0, merge=0/0, ticks=9569978/0, in_queue=9569978, util=98.66% 00:20:20.785 nvme1n1: ios=66275/0, merge=0/0, ticks=8702099/0, in_queue=8702099, util=98.42% 00:20:20.785 nvme2n1: ios=24748/0, merge=0/0, ticks=7627031/0, in_queue=7627031, util=98.76% 00:20:20.785 nvme3n1: ios=40789/0, merge=0/0, ticks=7844826/0, in_queue=7844826, util=99.01% 00:20:20.785 nvme4n1: ios=94241/0, merge=0/0, ticks=7075221/0, in_queue=7075221, util=99.02% 00:20:20.785 nvme5n1: ios=39934/0, merge=0/0, ticks=7804321/0, in_queue=7804321, util=99.13% 00:20:20.785 13:50:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:20:20.785 13:50:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:20:20.785 13:50:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:20.785 13:50:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:20:20.785 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:20.785 13:50:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:21.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:21.350 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:20:21.350 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:20:21.350 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:21.350 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:20:21.351 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:20:21.351 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:21.351 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:20:21.351 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.351 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.351 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:21.351 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.351 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:21.351 13:50:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:20:22.319 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:22.319 13:50:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:20:23.271 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:20:23.271 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:20:23.271 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:20:23.271 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:23.271 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:20:23.271 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:20:23.271 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:23.528 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:20:23.528 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:23.528 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.528 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:23.528 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.528 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:23.528 13:50:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:20:24.459 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:20:24.459 13:50:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:20:25.391 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1219 -- # local i=0 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:25.391 rmmod nvme_rdma 00:20:25.391 rmmod nvme_fabrics 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 2524400 ']' 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 2524400 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@948 -- # '[' -z 2524400 ']' 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # kill -0 2524400 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # uname 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2524400 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2524400' 00:20:25.391 killing process with pid 2524400 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # kill 2524400 00:20:25.391 13:50:51 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@972 -- # wait 2524400 00:20:25.958 13:50:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.958 13:50:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:25.958 00:20:25.958 real 0m34.784s 00:20:25.958 user 1m53.684s 00:20:25.958 sys 0m16.392s 00:20:25.958 13:50:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:25.958 13:50:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:25.958 ************************************ 00:20:25.958 END TEST nvmf_srq_overwhelm 00:20:25.958 ************************************ 00:20:25.958 13:50:52 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:20:25.958 13:50:52 nvmf_rdma -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:20:25.958 13:50:52 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:25.958 13:50:52 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:25.958 13:50:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:25.958 ************************************ 00:20:25.958 START TEST nvmf_shutdown 00:20:25.958 ************************************ 00:20:25.958 13:50:52 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:20:26.217 * Looking for test storage... 00:20:26.217 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:26.217 13:50:52 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.218 13:50:52 
nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.218 13:50:52 
nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:26.218 ************************************ 00:20:26.218 START TEST nvmf_shutdown_tc1 00:20:26.218 ************************************ 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.218 13:50:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:32.787 13:50:59 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:32.787 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:32.788 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:32.788 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:32.788 Found net devices under 0000:18:00.0: mlx_0_0 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:32.788 Found net devices under 0000:18:00.1: mlx_0_1 00:20:32.788 13:50:59 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.788 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:33.048 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:33.048 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:20:33.048 altname enp24s0f0np0 00:20:33.048 altname ens785f0np0 00:20:33.048 inet 192.168.100.8/24 scope global mlx_0_0 00:20:33.048 valid_lft forever preferred_lft forever 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:33.048 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:33.048 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:20:33.048 altname enp24s0f1np1 00:20:33.048 altname ens785f1np1 00:20:33.048 inet 192.168.100.9/24 scope global mlx_0_1 00:20:33.048 valid_lft forever preferred_lft forever 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.048 13:50:59 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:33.048 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:33.049 13:50:59 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:33.049 192.168.100.9' 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:33.049 192.168.100.9' 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:33.049 192.168.100.9' 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2531241 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2531241 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2531241 ']' 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
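For reference, the interface address discovery traced above at nvmf/common.sh@112-113 and @456-458 reduces to one pipeline per RDMA netdev. The wrapper below is a sketch reconstructed from the traced commands, not a copy of nvmf/common.sh:

# print a netdev's IPv4 address without the prefix length, as the harness does
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 on this node -> NVMF_FIRST_TARGET_IP
get_ip_address mlx_0_1   # 192.168.100.9 -> NVMF_SECOND_TARGET_IP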
00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.049 13:50:59 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.049 [2024-07-15 13:50:59.533012] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:33.049 [2024-07-15 13:50:59.533083] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.049 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.307 [2024-07-15 13:50:59.621334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.307 [2024-07-15 13:50:59.714471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.307 [2024-07-15 13:50:59.714510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.307 [2024-07-15 13:50:59.714525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.307 [2024-07-15 13:50:59.714535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.307 [2024-07-15 13:50:59.714545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.307 [2024-07-15 13:50:59.714673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.307 [2024-07-15 13:50:59.717404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.307 [2024-07-15 13:50:59.717509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.307 [2024-07-15 13:50:59.717509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:33.871 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.871 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:33.871 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.871 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.871 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:34.129 [2024-07-15 13:51:00.439063] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1610480/0x1614970) succeed. 00:20:34.129 [2024-07-15 13:51:00.448613] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1611ac0/0x1656000) succeed. 
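The transport setup at target/shutdown.sh@20 above goes through rpc_cmd, which in the SPDK test harness forwards to scripts/rpc.py. A standalone equivalent is sketched below, assuming the default /var/tmp/spdk.sock socket shown in the trace:

# create the RDMA transport with 8 KiB IO units and 1024 shared data buffers
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192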
00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.129 13:51:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:34.387 Malloc1 00:20:34.387 [2024-07-15 13:51:00.679616] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:34.387 Malloc2 00:20:34.387 Malloc3 00:20:34.387 Malloc4 
00:20:34.387 Malloc5 00:20:34.387 Malloc6 00:20:34.645 Malloc7 00:20:34.645 Malloc8 00:20:34.645 Malloc9 00:20:34.645 Malloc10 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2531619 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2531619 /var/tmp/bdevperf.sock 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2531619 ']' 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
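Two steps are compressed in the trace above. First, the cat loop at target/shutdown.sh@27-28 appends one block of target RPCs per subsystem to rpcs.txt; the block itself is not echoed in the trace, so the sketch below uses assumed bdev sizes and serial numbers, while the listener address and port come from the rdma.c notice above. Second, target/shutdown.sh@77 hands the JSON produced by gen_nvmf_target_json to bdev_svc over a process-substitution fd (/dev/fd/63 in the trace); this is the process that shutdown_tc1 later kills.

# per-subsystem RPC block (sizes and serials illustrative), one per subsystem 1..10
for i in {1..10}; do
  cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done

# bdev-only app on its own RPC socket, configured entirely from the generated JSON
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}")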
00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.645 { 00:20:34.645 "params": { 00:20:34.645 "name": "Nvme$subsystem", 00:20:34.645 "trtype": "$TEST_TRANSPORT", 00:20:34.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.645 "adrfam": "ipv4", 00:20:34.645 "trsvcid": "$NVMF_PORT", 00:20:34.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.645 "hdgst": ${hdgst:-false}, 00:20:34.645 "ddgst": ${ddgst:-false} 00:20:34.645 }, 00:20:34.645 "method": "bdev_nvme_attach_controller" 00:20:34.645 } 00:20:34.645 EOF 00:20:34.645 )") 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.645 { 00:20:34.645 "params": { 00:20:34.645 "name": "Nvme$subsystem", 00:20:34.645 "trtype": "$TEST_TRANSPORT", 00:20:34.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.645 "adrfam": "ipv4", 00:20:34.645 "trsvcid": "$NVMF_PORT", 00:20:34.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.645 "hdgst": ${hdgst:-false}, 00:20:34.645 "ddgst": ${ddgst:-false} 00:20:34.645 }, 00:20:34.645 "method": "bdev_nvme_attach_controller" 00:20:34.645 } 00:20:34.645 EOF 00:20:34.645 )") 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.645 { 00:20:34.645 "params": { 00:20:34.645 "name": "Nvme$subsystem", 00:20:34.645 "trtype": "$TEST_TRANSPORT", 00:20:34.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.645 "adrfam": "ipv4", 00:20:34.645 "trsvcid": "$NVMF_PORT", 00:20:34.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.645 "hdgst": ${hdgst:-false}, 00:20:34.645 "ddgst": ${ddgst:-false} 00:20:34.645 }, 00:20:34.645 "method": "bdev_nvme_attach_controller" 00:20:34.645 } 00:20:34.645 EOF 00:20:34.645 )") 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.645 { 00:20:34.645 "params": { 00:20:34.645 "name": "Nvme$subsystem", 00:20:34.645 "trtype": "$TEST_TRANSPORT", 00:20:34.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.645 "adrfam": "ipv4", 00:20:34.645 "trsvcid": 
"$NVMF_PORT", 00:20:34.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.645 "hdgst": ${hdgst:-false}, 00:20:34.645 "ddgst": ${ddgst:-false} 00:20:34.645 }, 00:20:34.645 "method": "bdev_nvme_attach_controller" 00:20:34.645 } 00:20:34.645 EOF 00:20:34.645 )") 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.645 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.645 { 00:20:34.645 "params": { 00:20:34.646 "name": "Nvme$subsystem", 00:20:34.646 "trtype": "$TEST_TRANSPORT", 00:20:34.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.646 "adrfam": "ipv4", 00:20:34.646 "trsvcid": "$NVMF_PORT", 00:20:34.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.646 "hdgst": ${hdgst:-false}, 00:20:34.646 "ddgst": ${ddgst:-false} 00:20:34.646 }, 00:20:34.646 "method": "bdev_nvme_attach_controller" 00:20:34.646 } 00:20:34.646 EOF 00:20:34.646 )") 00:20:34.646 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:34.646 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.646 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.646 { 00:20:34.646 "params": { 00:20:34.646 "name": "Nvme$subsystem", 00:20:34.646 "trtype": "$TEST_TRANSPORT", 00:20:34.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.646 "adrfam": "ipv4", 00:20:34.646 "trsvcid": "$NVMF_PORT", 00:20:34.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.646 "hdgst": ${hdgst:-false}, 00:20:34.646 "ddgst": ${ddgst:-false} 00:20:34.646 }, 00:20:34.646 "method": "bdev_nvme_attach_controller" 00:20:34.646 } 00:20:34.646 EOF 00:20:34.646 )") 00:20:34.646 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:34.646 [2024-07-15 13:51:01.166107] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:34.646 [2024-07-15 13:51:01.166164] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:34.646 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.646 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.646 { 00:20:34.646 "params": { 00:20:34.646 "name": "Nvme$subsystem", 00:20:34.646 "trtype": "$TEST_TRANSPORT", 00:20:34.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.646 "adrfam": "ipv4", 00:20:34.646 "trsvcid": "$NVMF_PORT", 00:20:34.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.646 "hdgst": ${hdgst:-false}, 00:20:34.646 "ddgst": ${ddgst:-false} 00:20:34.646 }, 00:20:34.646 "method": "bdev_nvme_attach_controller" 00:20:34.646 } 00:20:34.646 EOF 00:20:34.646 )") 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.904 { 00:20:34.904 "params": { 00:20:34.904 "name": "Nvme$subsystem", 00:20:34.904 "trtype": "$TEST_TRANSPORT", 00:20:34.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.904 "adrfam": "ipv4", 00:20:34.904 "trsvcid": "$NVMF_PORT", 00:20:34.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.904 "hdgst": ${hdgst:-false}, 00:20:34.904 "ddgst": ${ddgst:-false} 00:20:34.904 }, 00:20:34.904 "method": "bdev_nvme_attach_controller" 00:20:34.904 } 00:20:34.904 EOF 00:20:34.904 )") 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.904 { 00:20:34.904 "params": { 00:20:34.904 "name": "Nvme$subsystem", 00:20:34.904 "trtype": "$TEST_TRANSPORT", 00:20:34.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.904 "adrfam": "ipv4", 00:20:34.904 "trsvcid": "$NVMF_PORT", 00:20:34.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.904 "hdgst": ${hdgst:-false}, 00:20:34.904 "ddgst": ${ddgst:-false} 00:20:34.904 }, 00:20:34.904 "method": "bdev_nvme_attach_controller" 00:20:34.904 } 00:20:34.904 EOF 00:20:34.904 )") 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.904 { 00:20:34.904 "params": { 00:20:34.904 "name": "Nvme$subsystem", 00:20:34.904 "trtype": "$TEST_TRANSPORT", 00:20:34.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.904 "adrfam": "ipv4", 00:20:34.904 "trsvcid": "$NVMF_PORT", 00:20:34.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.904 "hdgst": 
${hdgst:-false}, 00:20:34.904 "ddgst": ${ddgst:-false} 00:20:34.904 }, 00:20:34.904 "method": "bdev_nvme_attach_controller" 00:20:34.904 } 00:20:34.904 EOF 00:20:34.904 )") 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:34.904 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:34.904 13:51:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:34.904 "params": { 00:20:34.904 "name": "Nvme1", 00:20:34.904 "trtype": "rdma", 00:20:34.904 "traddr": "192.168.100.8", 00:20:34.904 "adrfam": "ipv4", 00:20:34.904 "trsvcid": "4420", 00:20:34.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.904 "hdgst": false, 00:20:34.904 "ddgst": false 00:20:34.904 }, 00:20:34.904 "method": "bdev_nvme_attach_controller" 00:20:34.904 },{ 00:20:34.904 "params": { 00:20:34.904 "name": "Nvme2", 00:20:34.904 "trtype": "rdma", 00:20:34.904 "traddr": "192.168.100.8", 00:20:34.904 "adrfam": "ipv4", 00:20:34.904 "trsvcid": "4420", 00:20:34.904 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:34.904 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:34.904 "hdgst": false, 00:20:34.904 "ddgst": false 00:20:34.904 }, 00:20:34.904 "method": "bdev_nvme_attach_controller" 00:20:34.904 },{ 00:20:34.904 "params": { 00:20:34.904 "name": "Nvme3", 00:20:34.904 "trtype": "rdma", 00:20:34.904 "traddr": "192.168.100.8", 00:20:34.904 "adrfam": "ipv4", 00:20:34.904 "trsvcid": "4420", 00:20:34.904 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:34.904 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:34.904 "hdgst": false, 00:20:34.904 "ddgst": false 00:20:34.904 }, 00:20:34.904 "method": "bdev_nvme_attach_controller" 00:20:34.904 },{ 00:20:34.904 "params": { 00:20:34.904 "name": "Nvme4", 00:20:34.904 "trtype": "rdma", 00:20:34.904 "traddr": "192.168.100.8", 00:20:34.904 "adrfam": "ipv4", 00:20:34.904 "trsvcid": "4420", 00:20:34.904 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:34.904 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:34.904 "hdgst": false, 00:20:34.904 "ddgst": false 00:20:34.904 }, 00:20:34.904 "method": "bdev_nvme_attach_controller" 00:20:34.904 },{ 00:20:34.904 "params": { 00:20:34.904 "name": "Nvme5", 00:20:34.904 "trtype": "rdma", 00:20:34.904 "traddr": "192.168.100.8", 00:20:34.904 "adrfam": "ipv4", 00:20:34.904 "trsvcid": "4420", 00:20:34.904 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:34.904 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:34.904 "hdgst": false, 00:20:34.904 "ddgst": false 00:20:34.904 }, 00:20:34.904 "method": "bdev_nvme_attach_controller" 00:20:34.904 },{ 00:20:34.905 "params": { 00:20:34.905 "name": "Nvme6", 00:20:34.905 "trtype": "rdma", 00:20:34.905 "traddr": "192.168.100.8", 00:20:34.905 "adrfam": "ipv4", 00:20:34.905 "trsvcid": "4420", 00:20:34.905 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:34.905 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:34.905 "hdgst": false, 00:20:34.905 "ddgst": false 00:20:34.905 }, 00:20:34.905 "method": "bdev_nvme_attach_controller" 00:20:34.905 },{ 00:20:34.905 "params": { 00:20:34.905 "name": "Nvme7", 00:20:34.905 "trtype": "rdma", 00:20:34.905 "traddr": "192.168.100.8", 00:20:34.905 "adrfam": "ipv4", 00:20:34.905 "trsvcid": "4420", 00:20:34.905 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:34.905 "hostnqn": "nqn.2016-06.io.spdk:host7", 
00:20:34.905 "hdgst": false, 00:20:34.905 "ddgst": false 00:20:34.905 }, 00:20:34.905 "method": "bdev_nvme_attach_controller" 00:20:34.905 },{ 00:20:34.905 "params": { 00:20:34.905 "name": "Nvme8", 00:20:34.905 "trtype": "rdma", 00:20:34.905 "traddr": "192.168.100.8", 00:20:34.905 "adrfam": "ipv4", 00:20:34.905 "trsvcid": "4420", 00:20:34.905 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:34.905 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:34.905 "hdgst": false, 00:20:34.905 "ddgst": false 00:20:34.905 }, 00:20:34.905 "method": "bdev_nvme_attach_controller" 00:20:34.905 },{ 00:20:34.905 "params": { 00:20:34.905 "name": "Nvme9", 00:20:34.905 "trtype": "rdma", 00:20:34.905 "traddr": "192.168.100.8", 00:20:34.905 "adrfam": "ipv4", 00:20:34.905 "trsvcid": "4420", 00:20:34.905 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:34.905 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:34.905 "hdgst": false, 00:20:34.905 "ddgst": false 00:20:34.905 }, 00:20:34.905 "method": "bdev_nvme_attach_controller" 00:20:34.905 },{ 00:20:34.905 "params": { 00:20:34.905 "name": "Nvme10", 00:20:34.905 "trtype": "rdma", 00:20:34.905 "traddr": "192.168.100.8", 00:20:34.905 "adrfam": "ipv4", 00:20:34.905 "trsvcid": "4420", 00:20:34.905 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:34.905 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:34.905 "hdgst": false, 00:20:34.905 "ddgst": false 00:20:34.905 }, 00:20:34.905 "method": "bdev_nvme_attach_controller" 00:20:34.905 }' 00:20:34.905 [2024-07-15 13:51:01.256420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.905 [2024-07-15 13:51:01.338680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.838 13:51:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.838 13:51:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:35.838 13:51:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:35.838 13:51:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.838 13:51:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.838 13:51:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.838 13:51:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2531619 00:20:35.838 13:51:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:35.838 13:51:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:36.787 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2531619 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2531241 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:36.787 13:51:03 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.787 { 00:20:36.787 "params": { 00:20:36.787 "name": "Nvme$subsystem", 00:20:36.787 "trtype": "$TEST_TRANSPORT", 00:20:36.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.787 "adrfam": "ipv4", 00:20:36.787 "trsvcid": "$NVMF_PORT", 00:20:36.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.787 "hdgst": ${hdgst:-false}, 00:20:36.787 "ddgst": ${ddgst:-false} 00:20:36.787 }, 00:20:36.787 "method": "bdev_nvme_attach_controller" 00:20:36.787 } 00:20:36.787 EOF 00:20:36.787 )") 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.787 { 00:20:36.787 "params": { 00:20:36.787 "name": "Nvme$subsystem", 00:20:36.787 "trtype": "$TEST_TRANSPORT", 00:20:36.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.787 "adrfam": "ipv4", 00:20:36.787 "trsvcid": "$NVMF_PORT", 00:20:36.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.787 "hdgst": ${hdgst:-false}, 00:20:36.787 "ddgst": ${ddgst:-false} 00:20:36.787 }, 00:20:36.787 "method": "bdev_nvme_attach_controller" 00:20:36.787 } 00:20:36.787 EOF 00:20:36.787 )") 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.787 { 00:20:36.787 "params": { 00:20:36.787 "name": "Nvme$subsystem", 00:20:36.787 "trtype": "$TEST_TRANSPORT", 00:20:36.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.787 "adrfam": "ipv4", 00:20:36.787 "trsvcid": "$NVMF_PORT", 00:20:36.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.787 "hdgst": ${hdgst:-false}, 00:20:36.787 "ddgst": ${ddgst:-false} 00:20:36.787 }, 00:20:36.787 "method": "bdev_nvme_attach_controller" 00:20:36.787 } 00:20:36.787 EOF 00:20:36.787 )") 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.787 { 00:20:36.787 "params": { 00:20:36.787 "name": "Nvme$subsystem", 00:20:36.787 "trtype": "$TEST_TRANSPORT", 00:20:36.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.787 "adrfam": "ipv4", 00:20:36.787 "trsvcid": "$NVMF_PORT", 00:20:36.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.787 "hdgst": ${hdgst:-false}, 00:20:36.787 "ddgst": ${ddgst:-false} 00:20:36.787 }, 00:20:36.787 "method": "bdev_nvme_attach_controller" 00:20:36.787 } 00:20:36.787 EOF 00:20:36.787 )") 00:20:36.787 13:51:03 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.787 { 00:20:36.787 "params": { 00:20:36.787 "name": "Nvme$subsystem", 00:20:36.787 "trtype": "$TEST_TRANSPORT", 00:20:36.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.787 "adrfam": "ipv4", 00:20:36.787 "trsvcid": "$NVMF_PORT", 00:20:36.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.787 "hdgst": ${hdgst:-false}, 00:20:36.787 "ddgst": ${ddgst:-false} 00:20:36.787 }, 00:20:36.787 "method": "bdev_nvme_attach_controller" 00:20:36.787 } 00:20:36.787 EOF 00:20:36.787 )") 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.787 { 00:20:36.787 "params": { 00:20:36.787 "name": "Nvme$subsystem", 00:20:36.787 "trtype": "$TEST_TRANSPORT", 00:20:36.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.787 "adrfam": "ipv4", 00:20:36.787 "trsvcid": "$NVMF_PORT", 00:20:36.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.787 "hdgst": ${hdgst:-false}, 00:20:36.787 "ddgst": ${ddgst:-false} 00:20:36.787 }, 00:20:36.787 "method": "bdev_nvme_attach_controller" 00:20:36.787 } 00:20:36.787 EOF 00:20:36.787 )") 00:20:36.787 [2024-07-15 13:51:03.265236] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:36.787 [2024-07-15 13:51:03.265303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531887 ] 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.787 { 00:20:36.787 "params": { 00:20:36.787 "name": "Nvme$subsystem", 00:20:36.787 "trtype": "$TEST_TRANSPORT", 00:20:36.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.787 "adrfam": "ipv4", 00:20:36.787 "trsvcid": "$NVMF_PORT", 00:20:36.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.787 "hdgst": ${hdgst:-false}, 00:20:36.787 "ddgst": ${ddgst:-false} 00:20:36.787 }, 00:20:36.787 "method": "bdev_nvme_attach_controller" 00:20:36.787 } 00:20:36.787 EOF 00:20:36.787 )") 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.787 { 00:20:36.787 "params": { 00:20:36.787 "name": "Nvme$subsystem", 00:20:36.787 "trtype": "$TEST_TRANSPORT", 00:20:36.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.787 "adrfam": "ipv4", 00:20:36.787 "trsvcid": "$NVMF_PORT", 00:20:36.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.787 "hdgst": ${hdgst:-false}, 00:20:36.787 "ddgst": ${ddgst:-false} 00:20:36.787 }, 00:20:36.787 "method": "bdev_nvme_attach_controller" 00:20:36.787 } 00:20:36.787 EOF 00:20:36.787 )") 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.787 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.787 { 00:20:36.787 "params": { 00:20:36.787 "name": "Nvme$subsystem", 00:20:36.787 "trtype": "$TEST_TRANSPORT", 00:20:36.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.787 "adrfam": "ipv4", 00:20:36.787 "trsvcid": "$NVMF_PORT", 00:20:36.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.787 "hdgst": ${hdgst:-false}, 00:20:36.787 "ddgst": ${ddgst:-false} 00:20:36.787 }, 00:20:36.787 "method": "bdev_nvme_attach_controller" 00:20:36.787 } 00:20:36.788 EOF 00:20:36.788 )") 00:20:36.788 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.788 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.788 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.788 { 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme$subsystem", 00:20:36.788 "trtype": "$TEST_TRANSPORT", 00:20:36.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 "trsvcid": "$NVMF_PORT", 00:20:36.788 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.788 "hdgst": ${hdgst:-false}, 00:20:36.788 "ddgst": ${ddgst:-false} 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 } 00:20:36.788 EOF 00:20:36.788 )") 00:20:36.788 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:36.788 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.788 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:20:36.788 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:36.788 13:51:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme1", 00:20:36.788 "trtype": "rdma", 00:20:36.788 "traddr": "192.168.100.8", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 "trsvcid": "4420", 00:20:36.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.788 "hdgst": false, 00:20:36.788 "ddgst": false 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 },{ 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme2", 00:20:36.788 "trtype": "rdma", 00:20:36.788 "traddr": "192.168.100.8", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 "trsvcid": "4420", 00:20:36.788 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.788 "hdgst": false, 00:20:36.788 "ddgst": false 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 },{ 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme3", 00:20:36.788 "trtype": "rdma", 00:20:36.788 "traddr": "192.168.100.8", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 "trsvcid": "4420", 00:20:36.788 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:36.788 "hdgst": false, 00:20:36.788 "ddgst": false 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 },{ 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme4", 00:20:36.788 "trtype": "rdma", 00:20:36.788 "traddr": "192.168.100.8", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 "trsvcid": "4420", 00:20:36.788 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:36.788 "hdgst": false, 00:20:36.788 "ddgst": false 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 },{ 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme5", 00:20:36.788 "trtype": "rdma", 00:20:36.788 "traddr": "192.168.100.8", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 "trsvcid": "4420", 00:20:36.788 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:36.788 "hdgst": false, 00:20:36.788 "ddgst": false 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 },{ 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme6", 00:20:36.788 "trtype": "rdma", 00:20:36.788 "traddr": "192.168.100.8", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 "trsvcid": "4420", 00:20:36.788 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:36.788 "hdgst": false, 00:20:36.788 "ddgst": false 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 },{ 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme7", 00:20:36.788 "trtype": "rdma", 00:20:36.788 "traddr": "192.168.100.8", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 
"trsvcid": "4420", 00:20:36.788 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:36.788 "hdgst": false, 00:20:36.788 "ddgst": false 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 },{ 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme8", 00:20:36.788 "trtype": "rdma", 00:20:36.788 "traddr": "192.168.100.8", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 "trsvcid": "4420", 00:20:36.788 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:36.788 "hdgst": false, 00:20:36.788 "ddgst": false 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 },{ 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme9", 00:20:36.788 "trtype": "rdma", 00:20:36.788 "traddr": "192.168.100.8", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 "trsvcid": "4420", 00:20:36.788 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:36.788 "hdgst": false, 00:20:36.788 "ddgst": false 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 },{ 00:20:36.788 "params": { 00:20:36.788 "name": "Nvme10", 00:20:36.788 "trtype": "rdma", 00:20:36.788 "traddr": "192.168.100.8", 00:20:36.788 "adrfam": "ipv4", 00:20:36.788 "trsvcid": "4420", 00:20:36.788 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:36.788 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:36.788 "hdgst": false, 00:20:36.788 "ddgst": false 00:20:36.788 }, 00:20:36.788 "method": "bdev_nvme_attach_controller" 00:20:36.788 }' 00:20:37.046 [2024-07-15 13:51:03.355368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.046 [2024-07-15 13:51:03.438372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.979 Running I/O for 1 seconds... 
00:20:39.352
00:20:39.352                                                                                        Latency(us)
00:20:39.352 Device Information      : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:20:39.352 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.352 Verification LBA range: start 0x0 length 0x400
00:20:39.352 Nvme1n1                 :       1.17     342.03      21.38       0.00     0.00   181753.55   25986.45   213362.42
00:20:39.352 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.352 Verification LBA range: start 0x0 length 0x400
00:20:39.352 Nvme2n1                 :       1.17     355.35      22.21       0.00     0.00   173445.74   26214.40   203332.56
00:20:39.352 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.352 Verification LBA range: start 0x0 length 0x400
00:20:39.352 Nvme3n1                 :       1.17     382.28      23.89       0.00     0.00   159149.70    5983.72   144977.03
00:20:39.352 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.352 Verification LBA range: start 0x0 length 0x400
00:20:39.352 Nvme4n1                 :       1.17     381.86      23.87       0.00     0.00   157214.21   10827.69   137682.59
00:20:39.352 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.352 Verification LBA range: start 0x0 length 0x400
00:20:39.352 Nvme5n1                 :       1.18     392.70      24.54       0.00     0.00   151828.83    6610.59   126740.93
00:20:39.352 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.352 Verification LBA range: start 0x0 length 0x400
00:20:39.352 Nvme6n1                 :       1.18     392.39      24.52       0.00     0.00   149855.81    6553.60   116255.17
00:20:39.352 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.352 Verification LBA range: start 0x0 length 0x400
00:20:39.352 Nvme7n1                 :       1.18     392.93      24.56       0.00     0.00   147548.25    6496.61   105313.50
00:20:39.352 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.352 Verification LBA range: start 0x0 length 0x400
00:20:39.352 Nvme8n1                 :       1.18     402.70      25.17       0.00     0.00   142045.47    6610.59    98930.87
00:20:39.352 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.352 Verification LBA range: start 0x0 length 0x400
00:20:39.352 Nvme9n1                 :       1.18     380.04      23.75       0.00     0.00   148752.85   10029.86    93460.03
00:20:39.352 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.352 Verification LBA range: start 0x0 length 0x400
00:20:39.352 Nvme10n1                :       1.18     325.28      20.33       0.00     0.00   171102.09   11055.64   217921.45
00:20:39.352 ===================================================================================================================
00:20:39.352 Total                   :               3747.56     234.22       0.00     0.00   157516.46    5983.72   217921.45
00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 --
nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:39.352 rmmod nvme_rdma 00:20:39.352 rmmod nvme_fabrics 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:39.352 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:39.353 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2531241 ']' 00:20:39.353 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2531241 00:20:39.353 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2531241 ']' 00:20:39.353 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2531241 00:20:39.353 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:39.353 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.353 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2531241 00:20:39.610 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:39.610 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:39.610 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2531241' 00:20:39.610 killing process with pid 2531241 00:20:39.610 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2531241 00:20:39.610 13:51:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2531241 00:20:39.868 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.868 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:39.868 00:20:39.868 real 0m13.806s 00:20:39.868 user 0m31.442s 00:20:39.868 sys 0m6.496s 00:20:39.868 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:39.868 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:39.868 ************************************ 00:20:39.868 END TEST nvmf_shutdown_tc1 00:20:39.868 ************************************ 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:40.128 
************************************ 00:20:40.128 START TEST nvmf_shutdown_tc2 00:20:40.128 ************************************ 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- 
# e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:40.128 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:40.128 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:40.128 13:51:06 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:40.128 Found net devices under 0000:18:00.0: mlx_0_0 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:40.128 Found net devices under 0000:18:00.1: mlx_0_1 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:40.128 13:51:06 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:40.128 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
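The rdma_device_init step traced above (common.sh@501 onwards) is simply the kernel RDMA stack being loaded before any interface or IP work happens. Collapsed into a loop for brevity — the script issues the modprobes one at a time — it amounts to:

    # load the IB/RDMA kernel modules required by the mlx5 ports, as traced by load_ib_rdma_modules
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done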
00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:40.129 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.129 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:20:40.129 altname enp24s0f0np0 00:20:40.129 altname ens785f0np0 00:20:40.129 inet 192.168.100.8/24 scope global mlx_0_0 00:20:40.129 valid_lft forever preferred_lft forever 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:40.129 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.129 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:20:40.129 altname enp24s0f1np1 00:20:40.129 altname ens785f1np1 00:20:40.129 inet 192.168.100.9/24 scope global mlx_0_1 00:20:40.129 valid_lft forever preferred_lft forever 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.129 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:40.388 192.168.100.9' 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:40.388 192.168.100.9' 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 
-- # echo '192.168.100.8 00:20:40.388 192.168.100.9' 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:20:40.388 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2532899 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2532899 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2532899 ']' 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.389 13:51:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.389 [2024-07-15 13:51:06.813807] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:40.389 [2024-07-15 13:51:06.813868] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.389 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.389 [2024-07-15 13:51:06.901920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.646 [2024-07-15 13:51:06.992651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
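The two target addresses used for the rest of this run come straight from the per-interface pipeline traced at common.sh@113 and the head/tail split at common.sh@457-458. A condensed sketch of what that trace amounts to (the real helpers walk get_rdma_if_list rather than a hard-coded NIC list):

    # fourth column of `ip -o -4 addr show` is addr/prefix; cut strips the prefix length
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 on this node
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 on this node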
00:20:40.646 [2024-07-15 13:51:06.992695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.646 [2024-07-15 13:51:06.992710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.646 [2024-07-15 13:51:06.992721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.646 [2024-07-15 13:51:06.992730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.646 [2024-07-15 13:51:06.992849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.646 [2024-07-15 13:51:06.992952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.646 [2024-07-15 13:51:06.993054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.646 [2024-07-15 13:51:06.993055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:41.210 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.210 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:41.210 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.210 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.210 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.210 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.210 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:41.210 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.210 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.210 [2024-07-15 13:51:07.704326] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19d6480/0x19da970) succeed. 00:20:41.210 [2024-07-15 13:51:07.713922] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19d7ac0/0x1a1c000) succeed. 
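The -m 0x1E mask passed to nvmf_tgt is binary 11110, i.e. cores 1-4, which is why four reactors report in (cores 1, 2, 3 and 4) and none on core 0. Once the target is listening on /var/tmp/spdk.sock, the RDMA transport is created through the harness's rpc_cmd wrapper; assuming rpc_cmd simply forwards its arguments to scripts/rpc.py against that socket, the same call issued directly would be:

    # RDMA transport with 1024 shared buffers and an 8192-byte IO unit size (flag meanings assumed)
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two "Create IB device ... succeed" notices above show the target binding to both mlx5 ports discovered earlier.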
00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.467 13:51:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.467 Malloc1 00:20:41.467 [2024-07-15 13:51:07.953401] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:41.467 Malloc2 00:20:41.724 Malloc3 00:20:41.724 Malloc4 
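The loop at shutdown.sh@27-28 writes one configuration block per subsystem into rpcs.txt; the blocks themselves are not echoed into this log, only their effect — the Malloc bdevs reported here and the single RDMA listener on 192.168.100.8 port 4420. Purely as an illustration of what could produce that effect (RPC names from the standard SPDK interface; sizes and serial numbers invented for the sketch), each block corresponds roughly to:

    i=1   # repeated for i = 1..10
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 512                  # backing bdev; size and block size assumed
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420 -f ipv4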
00:20:41.724 Malloc5 00:20:41.724 Malloc6 00:20:41.724 Malloc7 00:20:41.981 Malloc8 00:20:41.981 Malloc9 00:20:41.981 Malloc10 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2533134 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2533134 /var/tmp/bdevperf.sock 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2533134 ']' 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.981 { 00:20:41.981 "params": { 00:20:41.981 "name": "Nvme$subsystem", 00:20:41.981 "trtype": "$TEST_TRANSPORT", 00:20:41.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.981 "adrfam": "ipv4", 00:20:41.981 "trsvcid": "$NVMF_PORT", 00:20:41.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.981 "hdgst": ${hdgst:-false}, 00:20:41.981 "ddgst": ${ddgst:-false} 00:20:41.981 }, 00:20:41.981 "method": "bdev_nvme_attach_controller" 00:20:41.981 } 00:20:41.981 EOF 00:20:41.981 )") 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.981 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.982 { 00:20:41.982 "params": { 00:20:41.982 "name": "Nvme$subsystem", 
00:20:41.982 "trtype": "$TEST_TRANSPORT", 00:20:41.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.982 "adrfam": "ipv4", 00:20:41.982 "trsvcid": "$NVMF_PORT", 00:20:41.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.982 "hdgst": ${hdgst:-false}, 00:20:41.982 "ddgst": ${ddgst:-false} 00:20:41.982 }, 00:20:41.982 "method": "bdev_nvme_attach_controller" 00:20:41.982 } 00:20:41.982 EOF 00:20:41.982 )") 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.982 { 00:20:41.982 "params": { 00:20:41.982 "name": "Nvme$subsystem", 00:20:41.982 "trtype": "$TEST_TRANSPORT", 00:20:41.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.982 "adrfam": "ipv4", 00:20:41.982 "trsvcid": "$NVMF_PORT", 00:20:41.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.982 "hdgst": ${hdgst:-false}, 00:20:41.982 "ddgst": ${ddgst:-false} 00:20:41.982 }, 00:20:41.982 "method": "bdev_nvme_attach_controller" 00:20:41.982 } 00:20:41.982 EOF 00:20:41.982 )") 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.982 { 00:20:41.982 "params": { 00:20:41.982 "name": "Nvme$subsystem", 00:20:41.982 "trtype": "$TEST_TRANSPORT", 00:20:41.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.982 "adrfam": "ipv4", 00:20:41.982 "trsvcid": "$NVMF_PORT", 00:20:41.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.982 "hdgst": ${hdgst:-false}, 00:20:41.982 "ddgst": ${ddgst:-false} 00:20:41.982 }, 00:20:41.982 "method": "bdev_nvme_attach_controller" 00:20:41.982 } 00:20:41.982 EOF 00:20:41.982 )") 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.982 { 00:20:41.982 "params": { 00:20:41.982 "name": "Nvme$subsystem", 00:20:41.982 "trtype": "$TEST_TRANSPORT", 00:20:41.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.982 "adrfam": "ipv4", 00:20:41.982 "trsvcid": "$NVMF_PORT", 00:20:41.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.982 "hdgst": ${hdgst:-false}, 00:20:41.982 "ddgst": ${ddgst:-false} 00:20:41.982 }, 00:20:41.982 "method": "bdev_nvme_attach_controller" 00:20:41.982 } 00:20:41.982 EOF 00:20:41.982 )") 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.982 { 00:20:41.982 "params": { 00:20:41.982 "name": "Nvme$subsystem", 00:20:41.982 
"trtype": "$TEST_TRANSPORT", 00:20:41.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.982 "adrfam": "ipv4", 00:20:41.982 "trsvcid": "$NVMF_PORT", 00:20:41.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.982 "hdgst": ${hdgst:-false}, 00:20:41.982 "ddgst": ${ddgst:-false} 00:20:41.982 }, 00:20:41.982 "method": "bdev_nvme_attach_controller" 00:20:41.982 } 00:20:41.982 EOF 00:20:41.982 )") 00:20:41.982 [2024-07-15 13:51:08.468787] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:41.982 [2024-07-15 13:51:08.468851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533134 ] 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.982 { 00:20:41.982 "params": { 00:20:41.982 "name": "Nvme$subsystem", 00:20:41.982 "trtype": "$TEST_TRANSPORT", 00:20:41.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.982 "adrfam": "ipv4", 00:20:41.982 "trsvcid": "$NVMF_PORT", 00:20:41.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.982 "hdgst": ${hdgst:-false}, 00:20:41.982 "ddgst": ${ddgst:-false} 00:20:41.982 }, 00:20:41.982 "method": "bdev_nvme_attach_controller" 00:20:41.982 } 00:20:41.982 EOF 00:20:41.982 )") 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.982 { 00:20:41.982 "params": { 00:20:41.982 "name": "Nvme$subsystem", 00:20:41.982 "trtype": "$TEST_TRANSPORT", 00:20:41.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.982 "adrfam": "ipv4", 00:20:41.982 "trsvcid": "$NVMF_PORT", 00:20:41.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.982 "hdgst": ${hdgst:-false}, 00:20:41.982 "ddgst": ${ddgst:-false} 00:20:41.982 }, 00:20:41.982 "method": "bdev_nvme_attach_controller" 00:20:41.982 } 00:20:41.982 EOF 00:20:41.982 )") 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.982 { 00:20:41.982 "params": { 00:20:41.982 "name": "Nvme$subsystem", 00:20:41.982 "trtype": "$TEST_TRANSPORT", 00:20:41.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.982 "adrfam": "ipv4", 00:20:41.982 "trsvcid": "$NVMF_PORT", 00:20:41.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.982 "hdgst": ${hdgst:-false}, 00:20:41.982 "ddgst": ${ddgst:-false} 00:20:41.982 }, 00:20:41.982 "method": "bdev_nvme_attach_controller" 00:20:41.982 } 00:20:41.982 EOF 00:20:41.982 )") 00:20:41.982 
13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.982 { 00:20:41.982 "params": { 00:20:41.982 "name": "Nvme$subsystem", 00:20:41.982 "trtype": "$TEST_TRANSPORT", 00:20:41.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.982 "adrfam": "ipv4", 00:20:41.982 "trsvcid": "$NVMF_PORT", 00:20:41.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.982 "hdgst": ${hdgst:-false}, 00:20:41.982 "ddgst": ${ddgst:-false} 00:20:41.982 }, 00:20:41.982 "method": "bdev_nvme_attach_controller" 00:20:41.982 } 00:20:41.982 EOF 00:20:41.982 )") 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:41.982 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.982 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:20:42.239 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:42.239 13:51:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:42.239 "params": { 00:20:42.239 "name": "Nvme1", 00:20:42.239 "trtype": "rdma", 00:20:42.239 "traddr": "192.168.100.8", 00:20:42.239 "adrfam": "ipv4", 00:20:42.239 "trsvcid": "4420", 00:20:42.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.239 "hdgst": false, 00:20:42.239 "ddgst": false 00:20:42.239 }, 00:20:42.239 "method": "bdev_nvme_attach_controller" 00:20:42.239 },{ 00:20:42.239 "params": { 00:20:42.239 "name": "Nvme2", 00:20:42.239 "trtype": "rdma", 00:20:42.239 "traddr": "192.168.100.8", 00:20:42.239 "adrfam": "ipv4", 00:20:42.239 "trsvcid": "4420", 00:20:42.239 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:42.239 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:42.239 "hdgst": false, 00:20:42.239 "ddgst": false 00:20:42.239 }, 00:20:42.239 "method": "bdev_nvme_attach_controller" 00:20:42.239 },{ 00:20:42.239 "params": { 00:20:42.239 "name": "Nvme3", 00:20:42.239 "trtype": "rdma", 00:20:42.239 "traddr": "192.168.100.8", 00:20:42.239 "adrfam": "ipv4", 00:20:42.239 "trsvcid": "4420", 00:20:42.239 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:42.239 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:42.239 "hdgst": false, 00:20:42.239 "ddgst": false 00:20:42.239 }, 00:20:42.239 "method": "bdev_nvme_attach_controller" 00:20:42.239 },{ 00:20:42.239 "params": { 00:20:42.239 "name": "Nvme4", 00:20:42.239 "trtype": "rdma", 00:20:42.239 "traddr": "192.168.100.8", 00:20:42.239 "adrfam": "ipv4", 00:20:42.239 "trsvcid": "4420", 00:20:42.239 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:42.239 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:42.239 "hdgst": false, 00:20:42.239 "ddgst": false 00:20:42.239 }, 00:20:42.239 "method": "bdev_nvme_attach_controller" 00:20:42.239 },{ 00:20:42.239 "params": { 00:20:42.239 "name": "Nvme5", 00:20:42.239 "trtype": "rdma", 00:20:42.239 "traddr": "192.168.100.8", 00:20:42.239 "adrfam": "ipv4", 00:20:42.239 "trsvcid": "4420", 00:20:42.239 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:42.239 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:42.239 "hdgst": false, 00:20:42.239 "ddgst": false 00:20:42.239 }, 00:20:42.239 "method": "bdev_nvme_attach_controller" 00:20:42.239 },{ 00:20:42.239 "params": { 
00:20:42.239 "name": "Nvme6", 00:20:42.239 "trtype": "rdma", 00:20:42.239 "traddr": "192.168.100.8", 00:20:42.239 "adrfam": "ipv4", 00:20:42.239 "trsvcid": "4420", 00:20:42.239 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:42.239 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:42.239 "hdgst": false, 00:20:42.239 "ddgst": false 00:20:42.239 }, 00:20:42.239 "method": "bdev_nvme_attach_controller" 00:20:42.239 },{ 00:20:42.239 "params": { 00:20:42.239 "name": "Nvme7", 00:20:42.239 "trtype": "rdma", 00:20:42.239 "traddr": "192.168.100.8", 00:20:42.239 "adrfam": "ipv4", 00:20:42.239 "trsvcid": "4420", 00:20:42.239 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:42.239 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:42.239 "hdgst": false, 00:20:42.239 "ddgst": false 00:20:42.239 }, 00:20:42.239 "method": "bdev_nvme_attach_controller" 00:20:42.239 },{ 00:20:42.239 "params": { 00:20:42.239 "name": "Nvme8", 00:20:42.239 "trtype": "rdma", 00:20:42.239 "traddr": "192.168.100.8", 00:20:42.239 "adrfam": "ipv4", 00:20:42.239 "trsvcid": "4420", 00:20:42.239 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:42.239 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:42.239 "hdgst": false, 00:20:42.239 "ddgst": false 00:20:42.239 }, 00:20:42.239 "method": "bdev_nvme_attach_controller" 00:20:42.239 },{ 00:20:42.239 "params": { 00:20:42.239 "name": "Nvme9", 00:20:42.239 "trtype": "rdma", 00:20:42.239 "traddr": "192.168.100.8", 00:20:42.239 "adrfam": "ipv4", 00:20:42.239 "trsvcid": "4420", 00:20:42.239 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:42.239 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:42.239 "hdgst": false, 00:20:42.239 "ddgst": false 00:20:42.239 }, 00:20:42.239 "method": "bdev_nvme_attach_controller" 00:20:42.239 },{ 00:20:42.239 "params": { 00:20:42.239 "name": "Nvme10", 00:20:42.239 "trtype": "rdma", 00:20:42.239 "traddr": "192.168.100.8", 00:20:42.239 "adrfam": "ipv4", 00:20:42.239 "trsvcid": "4420", 00:20:42.239 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:42.239 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:42.239 "hdgst": false, 00:20:42.239 "ddgst": false 00:20:42.239 }, 00:20:42.239 "method": "bdev_nvme_attach_controller" 00:20:42.239 }' 00:20:42.239 [2024-07-15 13:51:08.558645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.239 [2024-07-15 13:51:08.640856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.169 Running I/O for 10 seconds... 
00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:43.169 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:43.427 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.427 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.427 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.427 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:43.427 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:43.427 13:51:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=147 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:20:43.684 13:51:10 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2533134 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2533134 ']' 00:20:43.684 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2533134 00:20:43.942 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:43.942 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.942 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2533134 00:20:43.942 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:43.942 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:43.942 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2533134' 00:20:43.942 killing process with pid 2533134 00:20:43.942 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2533134 00:20:43.942 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2533134 00:20:43.942 Received shutdown signal, test time was about 0.813924 seconds 00:20:43.942 00:20:43.942 Latency(us) 00:20:43.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.942 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.942 Verification LBA range: start 0x0 length 0x400 00:20:43.942 Nvme1n1 : 0.80 341.07 21.32 0.00 0.00 183167.04 7522.39 201508.95 00:20:43.942 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.942 Verification LBA range: start 0x0 length 0x400 00:20:43.942 Nvme2n1 : 0.80 360.57 22.54 0.00 0.00 169925.82 7693.36 189655.49 00:20:43.942 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.942 Verification LBA range: start 0x0 length 0x400 00:20:43.942 Nvme3n1 : 0.80 368.75 23.05 0.00 0.00 162818.76 4559.03 178713.82 00:20:43.942 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.942 Verification LBA range: start 0x0 length 0x400 00:20:43.942 Nvme4n1 : 0.80 400.50 25.03 0.00 0.00 147021.27 4729.99 131299.95 00:20:43.942 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.942 Verification LBA range: start 0x0 length 0x400 00:20:43.942 Nvme5n1 : 0.80 398.51 24.91 0.00 0.00 144807.85 8890.10 124005.51 00:20:43.942 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.942 Verification LBA range: start 0x0 length 0x400 00:20:43.942 Nvme6n1 : 0.80 397.63 24.85 0.00 0.00 142552.55 10257.81 112152.04 00:20:43.942 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.942 Verification LBA range: start 0x0 length 0x400 00:20:43.942 Nvme7n1 : 0.81 396.98 24.81 0.00 0.00 138899.72 10770.70 103033.99 00:20:43.942 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.942 
Verification LBA range: start 0x0 length 0x400 00:20:43.942 Nvme8n1 : 0.81 396.20 24.76 0.00 0.00 136635.08 11511.54 95283.65 00:20:43.942 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.942 Verification LBA range: start 0x0 length 0x400 00:20:43.942 Nvme9n1 : 0.81 395.38 24.71 0.00 0.00 134169.11 12423.35 102578.09 00:20:43.942 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.942 Verification LBA range: start 0x0 length 0x400 00:20:43.942 Nvme10n1 : 0.81 236.08 14.76 0.00 0.00 218901.33 3105.84 357427.65 00:20:43.942 =================================================================================================================== 00:20:43.942 Total : 3691.68 230.73 0.00 0.00 154674.89 3105.84 357427.65 00:20:44.200 13:51:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:45.137 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2532899 00:20:45.138 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:45.138 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:45.396 rmmod nvme_rdma 00:20:45.396 rmmod nvme_fabrics 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2532899 ']' 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2532899 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2532899 ']' 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2532899 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.396 13:51:11 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2532899 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2532899' 00:20:45.396 killing process with pid 2532899 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2532899 00:20:45.396 13:51:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2532899 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:45.966 00:20:45.966 real 0m5.826s 00:20:45.966 user 0m23.116s 00:20:45.966 sys 0m1.296s 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.966 ************************************ 00:20:45.966 END TEST nvmf_shutdown_tc2 00:20:45.966 ************************************ 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:45.966 ************************************ 00:20:45.966 START TEST nvmf_shutdown_tc3 00:20:45.966 ************************************ 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:45.966 13:51:12 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.966 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:45.967 13:51:12 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:45.967 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:45.967 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.967 13:51:12 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:45.967 Found net devices under 0000:18:00.0: mlx_0_0 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:45.967 Found net devices under 0000:18:00.1: mlx_0_1 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:45.967 13:51:12 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:45.967 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:46.295 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:46.295 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:20:46.295 altname enp24s0f0np0 00:20:46.295 altname ens785f0np0 00:20:46.295 inet 192.168.100.8/24 scope global mlx_0_0 00:20:46.295 valid_lft forever preferred_lft forever 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr 
show mlx_0_1 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:46.295 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:46.295 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:20:46.295 altname enp24s0f1np1 00:20:46.295 altname ens785f1np1 00:20:46.295 inet 192.168.100.9/24 scope global mlx_0_1 00:20:46.295 valid_lft forever preferred_lft forever 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:46.295 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@105 -- # continue 2 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:46.296 192.168.100.9' 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:46.296 192.168.100.9' 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:46.296 192.168.100.9' 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2533798 00:20:46.296 13:51:12 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2533798 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2533798 ']' 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.296 13:51:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.296 [2024-07-15 13:51:12.724452] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:46.296 [2024-07-15 13:51:12.724516] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.296 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.574 [2024-07-15 13:51:12.811562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.574 [2024-07-15 13:51:12.899145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.574 [2024-07-15 13:51:12.899189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.574 [2024-07-15 13:51:12.899206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.574 [2024-07-15 13:51:12.899216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.574 [2024-07-15 13:51:12.899225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
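The interface and address discovery traced above (get_rdma_if_list and get_ip_address in nvmf/common.sh) boils down to a short pipeline. The sketch below is an illustrative condensation rather than the exact library code; the interface names and the resulting addresses are the ones this run reports.
# Illustrative sketch of the traced get_ip_address pipeline
# (ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1):
for ifc in mlx_0_0 mlx_0_1; do
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# In this run it yields 192.168.100.8 (NVMF_FIRST_TARGET_IP) and
# 192.168.100.9 (NVMF_SECOND_TARGET_IP).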
00:20:46.574 [2024-07-15 13:51:12.899347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.574 [2024-07-15 13:51:12.899452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.574 [2024-07-15 13:51:12.899555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.574 [2024-07-15 13:51:12.899556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:47.146 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.146 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:47.146 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:47.146 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:47.146 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.146 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.146 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:47.146 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.146 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.146 [2024-07-15 13:51:13.631503] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x84f480/0x853970) succeed. 00:20:47.146 [2024-07-15 13:51:13.641086] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x850ac0/0x895000) succeed. 
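Stripped of the nvmfappstart/rpc_cmd wrappers, the tc3 target bring-up captured above amounts to roughly the following. The binary path, core mask, RPC socket and transport options are taken from this log; using framework_wait_init as the readiness check is a simplification of what waitforlisten actually polls, so treat this as a sketch, not the harness code.
# Sketch of the target bring-up (assumptions noted above).
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # reactors on cores 1-4 (mask 0x1E)
nvmfpid=$!
# Block until the app is initialized and answering on its default RPC socket.
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init
# Same transport RPC the harness issues via rpc_cmd above.
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192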
00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.404 13:51:13 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.404 Malloc1 00:20:47.404 [2024-07-15 13:51:13.879481] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:47.404 Malloc2 00:20:47.662 Malloc3 00:20:47.662 Malloc4 
00:20:47.662 Malloc5 00:20:47.662 Malloc6 00:20:47.662 Malloc7 00:20:47.662 Malloc8 00:20:47.920 Malloc9 00:20:47.920 Malloc10 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2534039 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2534039 /var/tmp/bdevperf.sock 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2534039 ']' 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
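The rpcs.txt assembled by the cat loop above (shutdown.sh@28) is not echoed into the log. As orientation only, one per-subsystem block typically looks like the sketch below: the NQN pattern, Malloc bdev name, transport, address and port match what this run reports elsewhere, while the bdev size, block size and serial number are assumed values.
# Hypothetical rpcs.txt block for subsystem 1 (illustration, not captured output);
# the test script presumably replays these lines against the target via rpc.py.
bdev_malloc_create -b Malloc1 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420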
00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.920 { 00:20:47.920 "params": { 00:20:47.920 "name": "Nvme$subsystem", 00:20:47.920 "trtype": "$TEST_TRANSPORT", 00:20:47.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.920 "adrfam": "ipv4", 00:20:47.920 "trsvcid": "$NVMF_PORT", 00:20:47.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.920 "hdgst": ${hdgst:-false}, 00:20:47.920 "ddgst": ${ddgst:-false} 00:20:47.920 }, 00:20:47.920 "method": "bdev_nvme_attach_controller" 00:20:47.920 } 00:20:47.920 EOF 00:20:47.920 )") 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.920 { 00:20:47.920 "params": { 00:20:47.920 "name": "Nvme$subsystem", 00:20:47.920 "trtype": "$TEST_TRANSPORT", 00:20:47.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.920 "adrfam": "ipv4", 00:20:47.920 "trsvcid": "$NVMF_PORT", 00:20:47.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.920 "hdgst": ${hdgst:-false}, 00:20:47.920 "ddgst": ${ddgst:-false} 00:20:47.920 }, 00:20:47.920 "method": "bdev_nvme_attach_controller" 00:20:47.920 } 00:20:47.920 EOF 00:20:47.920 )") 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.920 { 00:20:47.920 "params": { 00:20:47.920 "name": "Nvme$subsystem", 00:20:47.920 "trtype": "$TEST_TRANSPORT", 00:20:47.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.920 "adrfam": "ipv4", 00:20:47.920 "trsvcid": "$NVMF_PORT", 00:20:47.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.920 "hdgst": ${hdgst:-false}, 00:20:47.920 "ddgst": ${ddgst:-false} 00:20:47.920 }, 00:20:47.920 "method": "bdev_nvme_attach_controller" 00:20:47.920 } 00:20:47.920 EOF 00:20:47.920 )") 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.920 { 00:20:47.920 "params": { 00:20:47.920 "name": "Nvme$subsystem", 00:20:47.920 "trtype": "$TEST_TRANSPORT", 00:20:47.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.920 "adrfam": "ipv4", 00:20:47.920 "trsvcid": 
"$NVMF_PORT", 00:20:47.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.920 "hdgst": ${hdgst:-false}, 00:20:47.920 "ddgst": ${ddgst:-false} 00:20:47.920 }, 00:20:47.920 "method": "bdev_nvme_attach_controller" 00:20:47.920 } 00:20:47.920 EOF 00:20:47.920 )") 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.920 { 00:20:47.920 "params": { 00:20:47.920 "name": "Nvme$subsystem", 00:20:47.920 "trtype": "$TEST_TRANSPORT", 00:20:47.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.920 "adrfam": "ipv4", 00:20:47.920 "trsvcid": "$NVMF_PORT", 00:20:47.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.920 "hdgst": ${hdgst:-false}, 00:20:47.920 "ddgst": ${ddgst:-false} 00:20:47.920 }, 00:20:47.920 "method": "bdev_nvme_attach_controller" 00:20:47.920 } 00:20:47.920 EOF 00:20:47.920 )") 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.920 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.920 { 00:20:47.920 "params": { 00:20:47.920 "name": "Nvme$subsystem", 00:20:47.920 "trtype": "$TEST_TRANSPORT", 00:20:47.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.920 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "$NVMF_PORT", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.921 "hdgst": ${hdgst:-false}, 00:20:47.921 "ddgst": ${ddgst:-false} 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 } 00:20:47.921 EOF 00:20:47.921 )") 00:20:47.921 [2024-07-15 13:51:14.380226] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:20:47.921 [2024-07-15 13:51:14.380285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2534039 ] 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.921 { 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme$subsystem", 00:20:47.921 "trtype": "$TEST_TRANSPORT", 00:20:47.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "$NVMF_PORT", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.921 "hdgst": ${hdgst:-false}, 00:20:47.921 "ddgst": ${ddgst:-false} 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 } 00:20:47.921 EOF 00:20:47.921 )") 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.921 { 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme$subsystem", 00:20:47.921 "trtype": "$TEST_TRANSPORT", 00:20:47.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "$NVMF_PORT", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.921 "hdgst": ${hdgst:-false}, 00:20:47.921 "ddgst": ${ddgst:-false} 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 } 00:20:47.921 EOF 00:20:47.921 )") 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.921 { 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme$subsystem", 00:20:47.921 "trtype": "$TEST_TRANSPORT", 00:20:47.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "$NVMF_PORT", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.921 "hdgst": ${hdgst:-false}, 00:20:47.921 "ddgst": ${ddgst:-false} 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 } 00:20:47.921 EOF 00:20:47.921 )") 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.921 { 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme$subsystem", 00:20:47.921 "trtype": "$TEST_TRANSPORT", 00:20:47.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "$NVMF_PORT", 00:20:47.921 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.921 "hdgst": ${hdgst:-false}, 00:20:47.921 "ddgst": ${ddgst:-false} 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 } 00:20:47.921 EOF 00:20:47.921 )") 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:47.921 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:47.921 13:51:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme1", 00:20:47.921 "trtype": "rdma", 00:20:47.921 "traddr": "192.168.100.8", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "4420", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.921 "hdgst": false, 00:20:47.921 "ddgst": false 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 },{ 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme2", 00:20:47.921 "trtype": "rdma", 00:20:47.921 "traddr": "192.168.100.8", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "4420", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:47.921 "hdgst": false, 00:20:47.921 "ddgst": false 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 },{ 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme3", 00:20:47.921 "trtype": "rdma", 00:20:47.921 "traddr": "192.168.100.8", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "4420", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:47.921 "hdgst": false, 00:20:47.921 "ddgst": false 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 },{ 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme4", 00:20:47.921 "trtype": "rdma", 00:20:47.921 "traddr": "192.168.100.8", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "4420", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:47.921 "hdgst": false, 00:20:47.921 "ddgst": false 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 },{ 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme5", 00:20:47.921 "trtype": "rdma", 00:20:47.921 "traddr": "192.168.100.8", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "4420", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:47.921 "hdgst": false, 00:20:47.921 "ddgst": false 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 },{ 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme6", 00:20:47.921 "trtype": "rdma", 00:20:47.921 "traddr": "192.168.100.8", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "4420", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:47.921 "hdgst": false, 00:20:47.921 "ddgst": false 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 },{ 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme7", 00:20:47.921 "trtype": "rdma", 00:20:47.921 "traddr": "192.168.100.8", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 
"trsvcid": "4420", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:47.921 "hdgst": false, 00:20:47.921 "ddgst": false 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 },{ 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme8", 00:20:47.921 "trtype": "rdma", 00:20:47.921 "traddr": "192.168.100.8", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "4420", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:47.921 "hdgst": false, 00:20:47.921 "ddgst": false 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 },{ 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme9", 00:20:47.921 "trtype": "rdma", 00:20:47.921 "traddr": "192.168.100.8", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "4420", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:47.921 "hdgst": false, 00:20:47.921 "ddgst": false 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 },{ 00:20:47.921 "params": { 00:20:47.921 "name": "Nvme10", 00:20:47.921 "trtype": "rdma", 00:20:47.921 "traddr": "192.168.100.8", 00:20:47.921 "adrfam": "ipv4", 00:20:47.921 "trsvcid": "4420", 00:20:47.921 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:47.921 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:47.921 "hdgst": false, 00:20:47.921 "ddgst": false 00:20:47.921 }, 00:20:47.921 "method": "bdev_nvme_attach_controller" 00:20:47.921 }' 00:20:48.179 [2024-07-15 13:51:14.469950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.179 [2024-07-15 13:51:14.553283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.113 Running I/O for 10 seconds... 
00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.113 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.371 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.371 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=27 00:20:49.371 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 27 -ge 100 ']' 00:20:49.371 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:49.629 13:51:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.629 13:51:16 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=179 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 179 -ge 100 ']' 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2533798 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2533798 ']' 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2533798 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:49.629 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2533798 00:20:49.886 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:49.886 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:49.886 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2533798' 00:20:49.886 killing process with pid 2533798 00:20:49.886 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2533798 00:20:49.886 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2533798 00:20:50.452 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:50.452 13:51:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:51.027 [2024-07-15 13:51:17.250880] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257440 was disconnected and freed. reset controller. 00:20:51.027 [2024-07-15 13:51:17.253343] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192571c0 was disconnected and freed. reset controller. 00:20:51.027 [2024-07-15 13:51:17.255953] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256f40 was disconnected and freed. reset controller. 00:20:51.027 [2024-07-15 13:51:17.258085] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256cc0 was disconnected and freed. reset controller. 00:20:51.027 [2024-07-15 13:51:17.260296] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a40 was disconnected and freed. reset controller. 00:20:51.027 [2024-07-15 13:51:17.262786] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192567c0 was disconnected and freed. reset controller. 
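Each "params" object in the JSON generated above becomes one bdev_nvme_attach_controller call inside bdevperf. Issued by hand against the bdevperf RPC socket, the first entry would look roughly like the sketch below; the values come from the generated config, only the direct rpc.py form is an assumption.
# Sketch: manual equivalent of the Nvme1 entry in the generated bdevperf config.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b Nvme1 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
The waitforio polling traced above (shutdown.sh@59 through @69) is what decides that enough verify I/O has completed before the target is killed. Condensed, with the harness rpc_cmd wrapper replaced by a direct rpc.py call (an assumption; the command, threshold and timing are from the trace), it amounts to:
# Condensed sketch of waitforio: poll Nvme1n1 read completions on the bdevperf
# RPC socket, up to 10 times, 0.25 s apart, until at least 100 reads are seen.
i=10
while (( i != 0 )); do
  read_io_count=$($SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                  | jq -r '.bdevs[0].num_read_ops')
  [ "$read_io_count" -ge 100 ] && break   # this run saw 27, then 179
  sleep 0.25
  (( i-- ))
done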
00:20:51.027 [2024-07-15 13:51:17.262906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.262948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 
13:51:17.263647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.263960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.263991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.264036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.264068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.264113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.264145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.027 [2024-07-15 13:51:17.264190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183500 00:20:51.027 [2024-07-15 13:51:17.264222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.264267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a82f200 len:0x10000 key:0x183500 00:20:51.028 [2024-07-15 13:51:17.264298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.264343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183500 00:20:51.028 [2024-07-15 13:51:17.264375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.264420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a42f800 len:0x10000 key:0x183200 00:20:51.028 [2024-07-15 13:51:17.264451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.267090] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256540 was disconnected and freed. reset controller. 00:20:51.028 [2024-07-15 13:51:17.267418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.267456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.267497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aadfd80 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.267528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.267576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aacfd00 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.267610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.267646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aabfc80 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.267686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.267723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.267754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.267791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.267822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.267859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa8fb00 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.267890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.267927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa7fa80 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.267958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.267996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa6fa00 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.268027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.268095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.268163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.268231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.268299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.268367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183900 00:20:51.028 [2024-07-15 13:51:17.268435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adf0000 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.268514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.268590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.268658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.268726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.268794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.268862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.268930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.268967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.268999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 
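[editor's note] Every completion record in this dump carries the same status pair, which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION (00/08)": status code type 0x0 (generic command status) with status code 0x08, the expected result for writes still in flight while the controller's submission queues are deleted during shutdown. When reading a saved capture of this console output offline, a one-screen summary is usually more useful than the raw stream; a small sketch follows (the log path and function name are illustrative only).

# Hedged sketch: condense the qpair-teardown dump from a saved copy of this
# console log. Each printed WRITE carries one "key:0x..." token, so counting
# those tallies aborted writes per memory key; the second grep lists the
# qpairs that were disconnected and freed.
summarize_teardown() {
    local log=$1
    echo "== aborted WRITEs per memory key =="
    grep -o 'key:0x[0-9a-f]*' "$log" | sort | uniq -c | sort -rn
    echo "== qpairs disconnected =="
    grep -o 'qpair 0x[0-9a-f]* was disconnected and freed' "$log" | sort -u
}
# Example: summarize_teardown nvmf-phy-autotest-console.log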
00:20:51.028 [2024-07-15 13:51:17.269172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 
13:51:17.269807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x184200 00:20:51.028 [2024-07-15 13:51:17.269838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.028 [2024-07-15 13:51:17.269874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.269906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.269942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.269973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.270046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.270113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.270181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.270249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.270317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.270384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.270452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.270520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x184200 00:20:51.029 [2024-07-15 13:51:17.270632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.270700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.270768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.270842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.270914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.270951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.270982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271086] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af0f900 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 
nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeff880 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeef800 len:0x10000 key:0x183a00 00:20:51.029 [2024-07-15 13:51:17.271814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.271851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183500 00:20:51.029 [2024-07-15 13:51:17.271883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.274407] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192562c0 was disconnected and freed. reset controller. 00:20:51.029 [2024-07-15 13:51:17.274465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0dfd80 len:0x10000 key:0x183e00 00:20:51.029 [2024-07-15 13:51:17.274498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.274540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cfd00 len:0x10000 key:0x183e00 00:20:51.029 [2024-07-15 13:51:17.274583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.274620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bfc80 len:0x10000 key:0x183e00 00:20:51.029 [2024-07-15 13:51:17.274652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.274689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0afc00 len:0x10000 key:0x183e00 00:20:51.029 [2024-07-15 13:51:17.274721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.274758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09fb80 len:0x10000 key:0x183e00 00:20:51.029 [2024-07-15 13:51:17.274789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.274826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08fb00 len:0x10000 key:0x183e00 00:20:51.029 [2024-07-15 13:51:17.274858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 
p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.274894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07fa80 len:0x10000 key:0x183e00 00:20:51.029 [2024-07-15 13:51:17.274933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.274970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06fa00 len:0x10000 key:0x183e00 00:20:51.029 [2024-07-15 13:51:17.275001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.275038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f980 len:0x10000 key:0x183e00 00:20:51.029 [2024-07-15 13:51:17.275069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.029 [2024-07-15 13:51:17.275106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f900 len:0x10000 key:0x183e00 00:20:51.029 [2024-07-15 13:51:17.275138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f880 len:0x10000 key:0x183e00 00:20:51.030 [2024-07-15 13:51:17.275205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f800 len:0x10000 key:0x183e00 00:20:51.030 [2024-07-15 13:51:17.275273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f780 len:0x10000 key:0x183e00 00:20:51.030 [2024-07-15 13:51:17.275341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f700 len:0x10000 key:0x183e00 00:20:51.030 [2024-07-15 13:51:17.275409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aedf780 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.275478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 
13:51:17.275514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aecf700 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.275545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aebf680 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.275621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeaf600 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.275694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae9f580 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.275763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae8f500 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.275831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae7f480 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.275900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.275936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae6f400 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.275968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae5f380 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.276036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae4f300 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.276104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae3f280 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.276173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae2f200 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.276241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae1f180 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.276309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f100 len:0x10000 key:0x183a00 00:20:51.030 [2024-07-15 13:51:17.276377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.276446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.276518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.276597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.276665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.276736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276773] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.276804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.276872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.276940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.276977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 
nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x184300 00:20:51.030 [2024-07-15 13:51:17.277850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.030 [2024-07-15 13:51:17.277886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x184300 00:20:51.031 [2024-07-15 13:51:17.277918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.277955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x184300 00:20:51.031 [2024-07-15 13:51:17.277986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x184300 00:20:51.031 [2024-07-15 13:51:17.278058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x184300 00:20:51.031 [2024-07-15 13:51:17.278126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x184300 00:20:51.031 [2024-07-15 13:51:17.278194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x184300 00:20:51.031 [2024-07-15 13:51:17.278261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x184300 00:20:51.031 [2024-07-15 13:51:17.278330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x184300 00:20:51.031 [2024-07-15 13:51:17.278397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x184300 00:20:51.031 [2024-07-15 13:51:17.278465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x184300 00:20:51.031 [2024-07-15 13:51:17.278533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.278643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5dff80 
len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.278711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5cff00 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.278779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5bfe80 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.278852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.278888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0efe00 len:0x10000 key:0x183e00 00:20:51.031 [2024-07-15 13:51:17.278920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281042] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256040 was disconnected and freed. reset controller. 00:20:51.031 [2024-07-15 13:51:17.281097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.281130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.281204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.281272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.281341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.281409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281446] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.281477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.281546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.281622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.281694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183300 00:20:51.031 [2024-07-15 13:51:17.281769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183f00 00:20:51.031 [2024-07-15 13:51:17.281838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183f00 00:20:51.031 [2024-07-15 13:51:17.281906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.281943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183f00 00:20:51.031 [2024-07-15 13:51:17.281975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.282012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183f00 00:20:51.031 [2024-07-15 13:51:17.282043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.282080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183f00 00:20:51.031 [2024-07-15 13:51:17.282111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.282149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183f00 00:20:51.031 [2024-07-15 13:51:17.282181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.282218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183f00 00:20:51.031 [2024-07-15 13:51:17.282250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.031 [2024-07-15 13:51:17.282287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183f00 00:20:51.031 [2024-07-15 13:51:17.282318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.282355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.282387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.282424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.282455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.282492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.282528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.282574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.282606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.282643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.282675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.282711] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.282743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.282779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.282811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.282848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.282879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.282916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.282948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.282985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 
nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.283939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183f00 00:20:51.032 [2024-07-15 13:51:17.283971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 
len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.284949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.284986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.285018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.032 [2024-07-15 13:51:17.285054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184100 00:20:51.032 [2024-07-15 13:51:17.285086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.285122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184100 00:20:51.033 [2024-07-15 13:51:17.285154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.285190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184100 00:20:51.033 [2024-07-15 13:51:17.285221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.285258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x184100 
00:20:51.033 [2024-07-15 13:51:17.285289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.285326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8bf680 len:0x10000 key:0x184100 00:20:51.033 [2024-07-15 13:51:17.285357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.285393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8af600 len:0x10000 key:0x184100 00:20:51.033 [2024-07-15 13:51:17.285425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.285462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b89f580 len:0x10000 key:0x184100 00:20:51.033 [2024-07-15 13:51:17.285493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.285533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183300 00:20:51.033 [2024-07-15 13:51:17.285574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:43346000 sqhd:52b0 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.288872] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019206c40 was disconnected and freed. reset controller. 
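The dump above is SPDK printing, pair by pair, the in-flight WRITE commands and the completions that aborted them with ABORTED - SQ DELETION (status 00/08) while the I/O qpairs were torn down for the forced reset; each run ends with the qpair being disconnected and freed ("reset controller"). If a saved copy of this console output needs to be summarized offline, a couple of shell one-liners are enough. This is only a sketch, and it assumes the output has been saved to a file named build.log (that filename is an assumption, not something the test harness produces):

  # Count aborted completions per submission queue (qid), as reported above.
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c
  # Count how many individual WRITE commands were reported as aborted.
  grep -o 'WRITE sqid:[0-9]* cid:[0-9]* nsid:[0-9]* lba:[0-9]*' build.log | wc -l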
00:20:51.033 [2024-07-15 13:51:17.289038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.289076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.289110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.289148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.289181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.289212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.289244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.289275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.291736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.033 [2024-07-15 13:51:17.291781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:51.033 [2024-07-15 13:51:17.291812] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:51.033 [2024-07-15 13:51:17.291857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.291889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.291922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.291953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.291986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.292016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.292048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.292079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.294621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.033 [2024-07-15 13:51:17.294663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:20:51.033 [2024-07-15 13:51:17.294693] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:51.033 [2024-07-15 13:51:17.294737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.294769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.294801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.294832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.294864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.294894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.294927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.294965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.297158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.033 [2024-07-15 13:51:17.297199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:51.033 [2024-07-15 13:51:17.297230] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:51.033 [2024-07-15 13:51:17.297276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.297308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.297341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.297371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.297404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.297434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.297466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.297496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.299720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.033 [2024-07-15 13:51:17.299761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:51.033 [2024-07-15 13:51:17.299790] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:51.033 [2024-07-15 13:51:17.299834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.299866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.299899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.299930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.299962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.299993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.300025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.300056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.302368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.033 [2024-07-15 13:51:17.302408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:51.033 [2024-07-15 13:51:17.302437] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:51.033 [2024-07-15 13:51:17.302488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.302521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.302553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.302597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.302630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.302661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.302693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.302724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.305215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.033 [2024-07-15 13:51:17.305255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:51.033 [2024-07-15 13:51:17.305285] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:51.033 [2024-07-15 13:51:17.305329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.305360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.305392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.305423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.033 [2024-07-15 13:51:17.305455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.033 [2024-07-15 13:51:17.305487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.305519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.305550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.307625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.034 [2024-07-15 13:51:17.307667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:51.034 [2024-07-15 13:51:17.307695] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:51.034 [2024-07-15 13:51:17.307740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.307773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.307805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.307835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.307875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.307906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.307938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.307969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.309991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.034 [2024-07-15 13:51:17.310032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:51.034 [2024-07-15 13:51:17.310062] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:51.034 [2024-07-15 13:51:17.310106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.310139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.310171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.310201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.310234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.310264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.310297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.310327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.312312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.034 [2024-07-15 13:51:17.312353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:51.034 [2024-07-15 13:51:17.312383] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:51.034 [2024-07-15 13:51:17.312429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.312461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.312493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.312523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.312556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.312595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.312627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.034 [2024-07-15 13:51:17.312658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:63423 cdw0:0 sqhd:3800 p:0 m:0 dnr:0 00:20:51.034 [2024-07-15 13:51:17.344357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.034 [2024-07-15 13:51:17.344410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:51.034 [2024-07-15 13:51:17.344442] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:51.034 [2024-07-15 13:51:17.354058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:51.034 [2024-07-15 13:51:17.354090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:51.034 [2024-07-15 13:51:17.354101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:51.034 [2024-07-15 13:51:17.354147] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:51.034 [2024-07-15 13:51:17.354162] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:51.034 [2024-07-15 13:51:17.354175] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:51.034 [2024-07-15 13:51:17.354186] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:51.034 [2024-07-15 13:51:17.354201] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:51.034 [2024-07-15 13:51:17.354213] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:51.034 [2024-07-15 13:51:17.354224] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
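Each of the remaining controllers goes through the same sequence in the block above: the admin qpair reports CQ transport error -6 (No such device or address), the controller is marked as in a failed state, the failover attempt is declined because one is already in progress, and the pending ASYNC EVENT REQUESTs are aborted by SQ deletion; reset notices for the subsystems then follow (the rest of them continue below). To see at a glance which of the ten subsystems reached that state, the same saved log can be grepped; as before, build.log is an assumed filename for a saved copy of this console output, so this is a sketch rather than part of the test:

  # Subsystems that were marked failed, and subsystems that were reset.
  grep -o 'cnode[0-9]*] in failed state' build.log | sort | uniq -c
  grep -o 'cnode[0-9]*] resetting controller' build.log | sort | uniq -c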
00:20:51.034 [2024-07-15 13:51:17.354311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:51.034 [2024-07-15 13:51:17.354322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:51.034 [2024-07-15 13:51:17.354331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:51.034 [2024-07-15 13:51:17.354344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:51.034 [2024-07-15 13:51:17.356664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:51.034 task offset: 32768 on job bdev=Nvme6n1 fails
00:20:51.034
00:20:51.034 Latency(us)
00:20:51.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:51.034 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.034 Job: Nvme1n1 ended in about 1.93 seconds with error
00:20:51.034 Verification LBA range: start 0x0 length 0x400
00:20:51.034 Nvme1n1 : 1.93 145.35 9.08 33.22 0.00 356271.16 5214.39 1086871.82
00:20:51.034 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.034 Job: Nvme2n1 ended in about 1.93 seconds with error
00:20:51.034 Verification LBA range: start 0x0 length 0x400
00:20:51.034 Nvme2n1 : 1.93 141.13 8.82 33.21 0.00 361841.32 10542.75 1086871.82
00:20:51.034 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.034 Job: Nvme3n1 ended in about 1.93 seconds with error
00:20:51.034 Verification LBA range: start 0x0 length 0x400
00:20:51.034 Nvme3n1 : 1.93 149.35 9.33 33.19 0.00 342763.32 15614.66 1094166.26
00:20:51.034 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.034 Job: Nvme4n1 ended in about 1.93 seconds with error
00:20:51.034 Verification LBA range: start 0x0 length 0x400
00:20:51.034 Nvme4n1 : 1.93 152.91 9.56 33.17 0.00 333402.63 2763.91 1094166.26
00:20:51.034 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.034 Job: Nvme5n1 ended in about 1.93 seconds with error
00:20:51.034 Verification LBA range: start 0x0 length 0x400
00:20:51.034 Nvme5n1 : 1.93 140.91 8.81 33.16 0.00 353474.45 29861.62 1094166.26
00:20:51.034 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.034 Job: Nvme6n1 ended in about 1.93 seconds with error
00:20:51.034 Verification LBA range: start 0x0 length 0x400
00:20:51.034 Nvme6n1 : 1.93 132.56 8.28 33.14 0.00 367868.88 50605.19 1086871.82
00:20:51.034 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.034 Job: Nvme7n1 ended in about 1.93 seconds with error
00:20:51.034 Verification LBA range: start 0x0 length 0x400
00:20:51.034 Nvme7n1 : 1.93 132.50 8.28 33.12 0.00 358940.67 65649.98 1086871.82
00:20:51.034 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.034 Job: Nvme8n1 ended in about 1.93 seconds with error
00:20:51.034 Verification LBA range: start 0x0 length 0x400
00:20:51.034 Nvme8n1 : 1.93 132.43 8.28 33.11 0.00 362518.08 54708.31 1159816.24
00:20:51.034 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.034 Job: Nvme9n1 ended in about 1.93 seconds with error
00:20:51.034 Verification LBA range: start 0x0 length 0x400
00:20:51.034 Nvme9n1 : 1.93 132.37 8.27 33.09 0.00 359613.13 39435.58 1145227.35
00:20:51.034 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.034 Job: Nvme10n1 ended in about 1.93 seconds with error
00:20:51.034 Verification LBA range: start 0x0 length 0x400
00:20:51.034 Nvme10n1 : 1.93 99.24 6.20 33.08 0.00 445634.11 39663.53 1130638.47
00:20:51.034 ===================================================================================================================
00:20:51.034 Total : 1358.74 84.92 331.49 0.00 361934.11 2763.91 1159816.24
00:20:51.034 [2024-07-15 13:51:17.378989] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:51.034 [2024-07-15 13:51:17.379022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:51.034 [2024-07-15 13:51:17.379037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:51.034 [2024-07-15 13:51:17.388796] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:51.034 [2024-07-15 13:51:17.388857] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:51.034 [2024-07-15 13:51:17.388887] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:20:51.034 [2024-07-15 13:51:17.389003] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:51.034 [2024-07-15 13:51:17.389037] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:51.034 [2024-07-15 13:51:17.389062] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:20:51.034 [2024-07-15 13:51:17.389174] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:51.034 [2024-07-15 13:51:17.389197] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:51.035 [2024-07-15 13:51:17.389213] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:20:51.035 [2024-07-15 13:51:17.392741] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:51.035 [2024-07-15 13:51:17.392791] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:51.035 [2024-07-15 13:51:17.392817] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:20:51.035 [2024-07-15 13:51:17.392946] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:51.035 [2024-07-15 13:51:17.392981] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:51.035 [2024-07-15 13:51:17.393014] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340
00:20:51.035 [2024-07-15 13:51:17.393135] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:51.035 [2024-07-15 13:51:17.393169] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*:
RDMA connect error -74 00:20:51.035 [2024-07-15 13:51:17.393194] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:20:51.035 [2024-07-15 13:51:17.393295] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:51.035 [2024-07-15 13:51:17.393328] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:51.035 [2024-07-15 13:51:17.393352] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:20:51.035 [2024-07-15 13:51:17.394226] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:51.035 [2024-07-15 13:51:17.394268] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:51.035 [2024-07-15 13:51:17.394294] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:20:51.035 [2024-07-15 13:51:17.394402] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:51.035 [2024-07-15 13:51:17.394436] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:51.035 [2024-07-15 13:51:17.394460] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:20:51.035 [2024-07-15 13:51:17.394587] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:51.035 [2024-07-15 13:51:17.394622] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:51.035 [2024-07-15 13:51:17.394646] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2534039 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.294 13:51:17 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:51.294 rmmod nvme_rdma 00:20:51.294 rmmod nvme_fabrics 00:20:51.294 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 2534039 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:51.294 00:20:51.294 real 0m5.376s 00:20:51.294 user 0m18.057s 00:20:51.294 sys 0m1.433s 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.294 ************************************ 00:20:51.294 END TEST nvmf_shutdown_tc3 00:20:51.294 ************************************ 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:51.294 00:20:51.294 real 0m25.418s 00:20:51.294 user 1m12.775s 00:20:51.294 sys 0m9.508s 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:51.294 13:51:17 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:51.294 ************************************ 00:20:51.294 END TEST nvmf_shutdown 00:20:51.294 ************************************ 00:20:51.553 13:51:17 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:20:51.553 13:51:17 nvmf_rdma -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:51.553 13:51:17 nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.553 13:51:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:51.553 13:51:17 nvmf_rdma -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:51.553 13:51:17 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:51.553 13:51:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:51.553 13:51:17 nvmf_rdma -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:51.553 13:51:17 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:20:51.553 13:51:17 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:51.553 13:51:17 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:51.553 13:51:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:51.553 ************************************ 00:20:51.553 START TEST nvmf_multicontroller 00:20:51.553 ************************************ 00:20:51.553 13:51:17 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:20:51.553 * Looking for test 
storage... 00:20:51.553 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.553 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:20:51.812 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:20:51.812 00:20:51.812 real 0m0.144s 00:20:51.812 user 0m0.058s 00:20:51.812 sys 0m0.096s 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:51.812 13:51:18 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:51.812 ************************************ 00:20:51.813 END TEST nvmf_multicontroller 00:20:51.813 ************************************ 00:20:51.813 13:51:18 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:20:51.813 13:51:18 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:20:51.813 13:51:18 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:51.813 13:51:18 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:51.813 13:51:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:51.813 ************************************ 00:20:51.813 START TEST nvmf_aer 00:20:51.813 ************************************ 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:20:51.813 * Looking for test storage... 00:20:51.813 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
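For reference, the host identity exported above by nvmf/common.sh reduces to roughly the following sketch. The ${NVME_HOSTNQN##*:} expansion is an assumption about how the host ID is derived; the trace only shows that the ID equals the UUID suffix of the generated NQN.
# Sketch of the host identity setup traced above (assumption: the host ID is the UUID suffix of the NQN).
NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*:}     # 809f3706-e051-e711-906e-0017a4403562
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# The pair is later consumed by the "nvme connect" invocations via "${NVME_HOST[@]}".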
00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.813 13:51:18 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:58.431 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:58.431 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.431 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:58.432 Found net devices under 0000:18:00.0: mlx_0_0 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.432 
13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:58.432 Found net devices under 0000:18:00.1: mlx_0_1 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:58.432 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:58.692 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:58.692 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:20:58.692 altname enp24s0f0np0 00:20:58.692 altname ens785f0np0 00:20:58.692 inet 192.168.100.8/24 scope global mlx_0_0 00:20:58.692 valid_lft forever preferred_lft forever 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:58.692 13:51:24 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:58.692 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:58.692 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:20:58.692 altname enp24s0f1np1 00:20:58.692 altname ens785f1np1 00:20:58.692 inet 192.168.100.9/24 scope global mlx_0_1 00:20:58.692 valid_lft forever preferred_lft forever 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:58.692 192.168.100.9' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:58.692 192.168.100.9' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:58.692 192.168.100.9' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2537586 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2537586 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2537586 ']' 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.692 13:51:25 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.693 13:51:25 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:58.693 [2024-07-15 13:51:25.184010] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:20:58.693 [2024-07-15 13:51:25.184073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.693 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.951 [2024-07-15 13:51:25.269686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.951 [2024-07-15 13:51:25.357771] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.951 [2024-07-15 13:51:25.357820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.951 [2024-07-15 13:51:25.357830] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.951 [2024-07-15 13:51:25.357839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.951 [2024-07-15 13:51:25.357846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
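The nvmfappstart -m 0xF call traced above boils down to roughly the following minimal sketch, assuming a built SPDK tree; waitforlisten is approximated here by polling the default RPC socket with scripts/rpc.py, which is a stand-in rather than the helper's exact implementation.
# Minimal sketch of nvmfappstart -m 0xF as traced above (not the verbatim helper).
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, tracepoint group mask 0xFFFF, cores 0-3
nvmfpid=$!
# waitforlisten blocks until the target answers on /var/tmp/spdk.sock;
# polling rpc_get_methods is an equivalent stand-in:
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done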
00:20:58.951 [2024-07-15 13:51:25.357908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.951 [2024-07-15 13:51:25.357994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.951 [2024-07-15 13:51:25.358098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.951 [2024-07-15 13:51:25.358099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.516 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.516 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:59.516 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.516 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:59.516 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:59.775 [2024-07-15 13:51:26.080427] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6f2180/0x6f6670) succeed. 00:20:59.775 [2024-07-15 13:51:26.089957] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6f37c0/0x737d00) succeed. 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:59.775 Malloc0 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:59.775 [2024-07-15 13:51:26.269172] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- 
host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:59.775 [ 00:20:59.775 { 00:20:59.775 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:59.775 "subtype": "Discovery", 00:20:59.775 "listen_addresses": [], 00:20:59.775 "allow_any_host": true, 00:20:59.775 "hosts": [] 00:20:59.775 }, 00:20:59.775 { 00:20:59.775 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.775 "subtype": "NVMe", 00:20:59.775 "listen_addresses": [ 00:20:59.775 { 00:20:59.775 "trtype": "RDMA", 00:20:59.775 "adrfam": "IPv4", 00:20:59.775 "traddr": "192.168.100.8", 00:20:59.775 "trsvcid": "4420" 00:20:59.775 } 00:20:59.775 ], 00:20:59.775 "allow_any_host": true, 00:20:59.775 "hosts": [], 00:20:59.775 "serial_number": "SPDK00000000000001", 00:20:59.775 "model_number": "SPDK bdev Controller", 00:20:59.775 "max_namespaces": 2, 00:20:59.775 "min_cntlid": 1, 00:20:59.775 "max_cntlid": 65519, 00:20:59.775 "namespaces": [ 00:20:59.775 { 00:20:59.775 "nsid": 1, 00:20:59.775 "bdev_name": "Malloc0", 00:20:59.775 "name": "Malloc0", 00:20:59.775 "nguid": "027DDBFB571841BBA624CAB6A8DAA567", 00:20:59.775 "uuid": "027ddbfb-5718-41bb-a624-cab6a8daa567" 00:20:59.775 } 00:20:59.775 ] 00:20:59.775 } 00:20:59.775 ] 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=2537724 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:59.775 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:00.033 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.033 Malloc1 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.033 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.292 [ 00:21:00.292 { 00:21:00.292 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:00.292 "subtype": "Discovery", 00:21:00.292 "listen_addresses": [], 00:21:00.292 "allow_any_host": true, 00:21:00.292 "hosts": [] 00:21:00.292 }, 00:21:00.292 { 00:21:00.292 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.292 "subtype": "NVMe", 00:21:00.292 "listen_addresses": [ 00:21:00.292 { 00:21:00.292 "trtype": "RDMA", 00:21:00.292 "adrfam": "IPv4", 00:21:00.292 "traddr": "192.168.100.8", 00:21:00.292 "trsvcid": "4420" 00:21:00.292 } 00:21:00.292 ], 00:21:00.292 "allow_any_host": true, 00:21:00.292 "hosts": [], 00:21:00.292 "serial_number": "SPDK00000000000001", 00:21:00.292 "model_number": "SPDK bdev Controller", 00:21:00.292 "max_namespaces": 2, 00:21:00.292 "min_cntlid": 1, 00:21:00.292 "max_cntlid": 65519, 00:21:00.292 "namespaces": [ 00:21:00.292 { 00:21:00.292 "nsid": 1, 00:21:00.292 "bdev_name": "Malloc0", 00:21:00.292 "name": "Malloc0", 00:21:00.292 "nguid": "027DDBFB571841BBA624CAB6A8DAA567", 00:21:00.292 "uuid": "027ddbfb-5718-41bb-a624-cab6a8daa567" 00:21:00.292 }, 00:21:00.292 { 00:21:00.292 "nsid": 2, 00:21:00.292 "bdev_name": "Malloc1", 00:21:00.292 "name": "Malloc1", 00:21:00.292 "nguid": "1ADE83047C524738A7E7EC42172A54A2", 00:21:00.293 "uuid": "1ade8304-7c52-4738-a7e7-ec42172a54a2" 00:21:00.293 } 00:21:00.293 ] 00:21:00.293 } 00:21:00.293 ] 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 2537724 00:21:00.293 Asynchronous Event Request test 00:21:00.293 Attaching to 192.168.100.8 00:21:00.293 Attached to 192.168.100.8 00:21:00.293 Registering asynchronous event callbacks... 00:21:00.293 Starting namespace attribute notice tests for all controllers... 00:21:00.293 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:00.293 aer_cb - Changed Namespace 00:21:00.293 Cleaning up... 
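Condensed from the rpc_cmd trace above, the sequence that builds the two-namespace subsystem and produces the "Changed Namespace" AEN reported by the aer example is roughly the sketch below. rpc_py stands in for SPDK's scripts/rpc.py (which the rpc_cmd helper wraps), and the relative paths and explicit backgrounding are assumptions of this sketch, not a verbatim copy of aer.sh.
# Sketch of the AER flow traced above (aer.sh).
rpc_py="./scripts/rpc.py"
$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc_py bdev_malloc_create 64 512 --name Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# The aer example attaches over RDMA, arms its AER callbacks, and touches
# /tmp/aer_touch_file so the script can wait for it before changing namespaces.
./test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
aerpid=$!
# Adding a second namespace is what triggers the namespace-attribute AEN seen above.
$rpc_py bdev_malloc_create 64 4096 --name Malloc1
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait $aerpid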
00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:00.293 rmmod nvme_rdma 00:21:00.293 rmmod nvme_fabrics 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2537586 ']' 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2537586 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2537586 ']' 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2537586 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2537586 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2537586' 00:21:00.293 killing process with pid 2537586 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2537586 00:21:00.293 13:51:26 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2537586 00:21:00.552 13:51:27 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:00.552 13:51:27 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:00.552 00:21:00.552 real 0m8.865s 00:21:00.552 user 0m8.580s 00:21:00.552 sys 0m5.766s 00:21:00.552 13:51:27 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.552 13:51:27 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.552 ************************************ 00:21:00.552 END TEST nvmf_aer 00:21:00.552 ************************************ 00:21:00.812 13:51:27 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:00.812 13:51:27 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:00.812 13:51:27 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:00.812 13:51:27 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.812 13:51:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:00.812 ************************************ 00:21:00.812 START TEST nvmf_async_init 00:21:00.812 ************************************ 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:00.812 * Looking for test storage... 00:21:00.812 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.812 13:51:27 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=835884391d514931be3ea9ac131c302d 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.812 13:51:27 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.379 13:51:33 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:07.379 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:07.380 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:07.380 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:07.380 Found net devices under 0000:18:00.0: mlx_0_0 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:07.380 Found net devices under 0000:18:00.1: mlx_0_1 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:07.380 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:07.380 13:51:33 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:07.639 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:07.639 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:21:07.639 altname enp24s0f0np0 00:21:07.639 altname ens785f0np0 00:21:07.639 inet 192.168.100.8/24 scope global mlx_0_0 00:21:07.639 valid_lft forever preferred_lft forever 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.639 13:51:33 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:07.639 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:07.639 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:21:07.639 altname enp24s0f1np1 00:21:07.639 altname ens785f1np1 00:21:07.639 inet 192.168.100.9/24 scope global mlx_0_1 00:21:07.639 valid_lft forever preferred_lft forever 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:07.639 13:51:33 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:07.639 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:07.639 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.639 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.639 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:07.639 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:07.639 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:07.639 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:07.639 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.639 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:07.639 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:07.640 192.168.100.9' 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:07.640 192.168.100.9' 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:07.640 192.168.100.9' 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2540694 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2540694 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2540694 ']' 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
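[Editor's sketch, not part of the trace] The records above show nvmfappstart launching the target binary and waitforlisten polling until the RPC socket at /var/tmp/spdk.sock answers. A minimal way to reproduce that launch by hand might look like the following; only the nvmf_tgt command line and the socket path come from the trace, while the rpc.py liveness probe is an assumption about how one could wait outside the harness:

# Sketch only, assuming the workspace layout shown in the trace above.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &     # same flags nvmfappstart -m 0x1 passes
nvmfpid=$!
# Poll the default RPC socket until the target answers before issuing any rpc calls (assumed probe).
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done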
00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.640 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.640 [2024-07-15 13:51:34.149429] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:07.640 [2024-07-15 13:51:34.149495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.911 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.911 [2024-07-15 13:51:34.239025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.911 [2024-07-15 13:51:34.328238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.911 [2024-07-15 13:51:34.328283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.911 [2024-07-15 13:51:34.328292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.911 [2024-07-15 13:51:34.328301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.911 [2024-07-15 13:51:34.328307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.911 [2024-07-15 13:51:34.328338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.478 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.478 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:08.478 13:51:34 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.478 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.478 13:51:34 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.737 [2024-07-15 13:51:35.042890] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18b8f60/0x18bd450) succeed. 00:21:08.737 [2024-07-15 13:51:35.053193] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18ba460/0x18feae0) succeed. 
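[Editor's sketch, not part of the trace] With the target up, host/async_init.sh@26 creates the RDMA transport, and the two create_ib_device notices confirm one IB device per mlx5 port. The same call could be issued as a plain rpc.py invocation; this reuses the SPDK_DIR assumption from the sketch above, and the transport options are taken verbatim from the trace:

# Sketch only. rpc_cmd in the trace is a thin wrapper around this kind of helper.
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t rdma --num-shared-buffers 1024   # transport the listeners below will use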
00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.737 null0 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 835884391d514931be3ea9ac131c302d 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.737 [2024-07-15 13:51:35.150005] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.737 nvme0n1 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.737 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.737 [ 00:21:08.737 { 00:21:08.737 "name": "nvme0n1", 00:21:08.737 "aliases": [ 00:21:08.737 "83588439-1d51-4931-be3e-a9ac131c302d" 00:21:08.737 ], 00:21:08.737 "product_name": "NVMe disk", 00:21:08.737 "block_size": 512, 00:21:08.737 "num_blocks": 2097152, 00:21:08.737 "uuid": 
"83588439-1d51-4931-be3e-a9ac131c302d", 00:21:08.737 "assigned_rate_limits": { 00:21:08.737 "rw_ios_per_sec": 0, 00:21:08.737 "rw_mbytes_per_sec": 0, 00:21:08.737 "r_mbytes_per_sec": 0, 00:21:08.737 "w_mbytes_per_sec": 0 00:21:08.737 }, 00:21:08.737 "claimed": false, 00:21:08.737 "zoned": false, 00:21:08.737 "supported_io_types": { 00:21:08.737 "read": true, 00:21:08.737 "write": true, 00:21:08.737 "unmap": false, 00:21:08.737 "flush": true, 00:21:08.737 "reset": true, 00:21:08.737 "nvme_admin": true, 00:21:08.737 "nvme_io": true, 00:21:08.737 "nvme_io_md": false, 00:21:08.737 "write_zeroes": true, 00:21:08.737 "zcopy": false, 00:21:08.737 "get_zone_info": false, 00:21:08.737 "zone_management": false, 00:21:08.737 "zone_append": false, 00:21:08.737 "compare": true, 00:21:08.737 "compare_and_write": true, 00:21:08.737 "abort": true, 00:21:08.737 "seek_hole": false, 00:21:08.737 "seek_data": false, 00:21:08.737 "copy": true, 00:21:08.737 "nvme_iov_md": false 00:21:08.737 }, 00:21:08.737 "memory_domains": [ 00:21:08.737 { 00:21:08.737 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:08.737 "dma_device_type": 0 00:21:08.737 } 00:21:08.737 ], 00:21:08.737 "driver_specific": { 00:21:08.737 "nvme": [ 00:21:08.737 { 00:21:08.737 "trid": { 00:21:08.737 "trtype": "RDMA", 00:21:08.737 "adrfam": "IPv4", 00:21:08.737 "traddr": "192.168.100.8", 00:21:08.737 "trsvcid": "4420", 00:21:08.996 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:08.996 }, 00:21:08.996 "ctrlr_data": { 00:21:08.996 "cntlid": 1, 00:21:08.996 "vendor_id": "0x8086", 00:21:08.996 "model_number": "SPDK bdev Controller", 00:21:08.996 "serial_number": "00000000000000000000", 00:21:08.996 "firmware_revision": "24.09", 00:21:08.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:08.996 "oacs": { 00:21:08.996 "security": 0, 00:21:08.996 "format": 0, 00:21:08.996 "firmware": 0, 00:21:08.996 "ns_manage": 0 00:21:08.996 }, 00:21:08.996 "multi_ctrlr": true, 00:21:08.996 "ana_reporting": false 00:21:08.996 }, 00:21:08.996 "vs": { 00:21:08.996 "nvme_version": "1.3" 00:21:08.996 }, 00:21:08.996 "ns_data": { 00:21:08.996 "id": 1, 00:21:08.996 "can_share": true 00:21:08.996 } 00:21:08.996 } 00:21:08.996 ], 00:21:08.996 "mp_policy": "active_passive" 00:21:08.996 } 00:21:08.996 } 00:21:08.996 ] 00:21:08.996 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.996 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:08.996 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.996 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.996 [2024-07-15 13:51:35.271635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:08.996 [2024-07-15 13:51:35.298729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:08.996 [2024-07-15 13:51:35.320467] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:08.996 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.996 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:08.996 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.996 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.996 [ 00:21:08.996 { 00:21:08.996 "name": "nvme0n1", 00:21:08.996 "aliases": [ 00:21:08.996 "83588439-1d51-4931-be3e-a9ac131c302d" 00:21:08.996 ], 00:21:08.996 "product_name": "NVMe disk", 00:21:08.996 "block_size": 512, 00:21:08.996 "num_blocks": 2097152, 00:21:08.996 "uuid": "83588439-1d51-4931-be3e-a9ac131c302d", 00:21:08.996 "assigned_rate_limits": { 00:21:08.996 "rw_ios_per_sec": 0, 00:21:08.996 "rw_mbytes_per_sec": 0, 00:21:08.997 "r_mbytes_per_sec": 0, 00:21:08.997 "w_mbytes_per_sec": 0 00:21:08.997 }, 00:21:08.997 "claimed": false, 00:21:08.997 "zoned": false, 00:21:08.997 "supported_io_types": { 00:21:08.997 "read": true, 00:21:08.997 "write": true, 00:21:08.997 "unmap": false, 00:21:08.997 "flush": true, 00:21:08.997 "reset": true, 00:21:08.997 "nvme_admin": true, 00:21:08.997 "nvme_io": true, 00:21:08.997 "nvme_io_md": false, 00:21:08.997 "write_zeroes": true, 00:21:08.997 "zcopy": false, 00:21:08.997 "get_zone_info": false, 00:21:08.997 "zone_management": false, 00:21:08.997 "zone_append": false, 00:21:08.997 "compare": true, 00:21:08.997 "compare_and_write": true, 00:21:08.997 "abort": true, 00:21:08.997 "seek_hole": false, 00:21:08.997 "seek_data": false, 00:21:08.997 "copy": true, 00:21:08.997 "nvme_iov_md": false 00:21:08.997 }, 00:21:08.997 "memory_domains": [ 00:21:08.997 { 00:21:08.997 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:08.997 "dma_device_type": 0 00:21:08.997 } 00:21:08.997 ], 00:21:08.997 "driver_specific": { 00:21:08.997 "nvme": [ 00:21:08.997 { 00:21:08.997 "trid": { 00:21:08.997 "trtype": "RDMA", 00:21:08.997 "adrfam": "IPv4", 00:21:08.997 "traddr": "192.168.100.8", 00:21:08.997 "trsvcid": "4420", 00:21:08.997 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:08.997 }, 00:21:08.997 "ctrlr_data": { 00:21:08.997 "cntlid": 2, 00:21:08.997 "vendor_id": "0x8086", 00:21:08.997 "model_number": "SPDK bdev Controller", 00:21:08.997 "serial_number": "00000000000000000000", 00:21:08.997 "firmware_revision": "24.09", 00:21:08.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:08.997 "oacs": { 00:21:08.997 "security": 0, 00:21:08.997 "format": 0, 00:21:08.997 "firmware": 0, 00:21:08.997 "ns_manage": 0 00:21:08.997 }, 00:21:08.997 "multi_ctrlr": true, 00:21:08.997 "ana_reporting": false 00:21:08.997 }, 00:21:08.997 "vs": { 00:21:08.997 "nvme_version": "1.3" 00:21:08.997 }, 00:21:08.997 "ns_data": { 00:21:08.997 "id": 1, 00:21:08.997 "can_share": true 00:21:08.997 } 00:21:08.997 } 00:21:08.997 ], 00:21:08.997 "mp_policy": "active_passive" 00:21:08.997 } 00:21:08.997 } 00:21:08.997 ] 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 
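[Editor's sketch, not part of the trace] The mktemp above opens the secure-channel half of the test, traced in full just below: a TLS PSK is written to a 0600 key file, allow_any_host is disabled, a second listener is added on 4421 with --secure-channel, the host NQN is allowed with that PSK, and the controller is re-attached through 4421 (with the notice that TLS support is considered experimental). A sketch of those steps, using the rpc helper from the earlier sketches and the example key shown in the trace:

# Sketch only: secure-channel attach as traced below.
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"
rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
rm -f "$key_path"                      # the test removes the key file at the end of the run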
00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.92HSI8v7ZI 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.92HSI8v7ZI 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.997 [2024-07-15 13:51:35.403740] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.92HSI8v7ZI 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.92HSI8v7ZI 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.997 [2024-07-15 13:51:35.423774] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.997 nvme0n1 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.997 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.997 [ 00:21:08.997 { 00:21:08.997 "name": "nvme0n1", 00:21:08.997 "aliases": [ 00:21:08.997 "83588439-1d51-4931-be3e-a9ac131c302d" 00:21:08.997 ], 00:21:08.997 "product_name": "NVMe disk", 00:21:08.997 "block_size": 512, 00:21:08.997 "num_blocks": 2097152, 00:21:08.997 "uuid": "83588439-1d51-4931-be3e-a9ac131c302d", 00:21:08.997 "assigned_rate_limits": { 00:21:08.997 "rw_ios_per_sec": 0, 00:21:08.997 "rw_mbytes_per_sec": 0, 00:21:08.997 "r_mbytes_per_sec": 0, 00:21:08.997 "w_mbytes_per_sec": 0 00:21:08.997 }, 00:21:08.997 "claimed": false, 00:21:08.997 "zoned": false, 00:21:08.997 "supported_io_types": { 
00:21:08.997 "read": true, 00:21:08.997 "write": true, 00:21:08.997 "unmap": false, 00:21:08.997 "flush": true, 00:21:08.997 "reset": true, 00:21:08.997 "nvme_admin": true, 00:21:08.997 "nvme_io": true, 00:21:08.997 "nvme_io_md": false, 00:21:08.997 "write_zeroes": true, 00:21:08.997 "zcopy": false, 00:21:08.997 "get_zone_info": false, 00:21:08.997 "zone_management": false, 00:21:08.997 "zone_append": false, 00:21:08.997 "compare": true, 00:21:08.997 "compare_and_write": true, 00:21:08.997 "abort": true, 00:21:08.997 "seek_hole": false, 00:21:08.997 "seek_data": false, 00:21:08.997 "copy": true, 00:21:08.997 "nvme_iov_md": false 00:21:08.997 }, 00:21:08.997 "memory_domains": [ 00:21:08.997 { 00:21:08.997 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:08.997 "dma_device_type": 0 00:21:08.997 } 00:21:08.997 ], 00:21:08.997 "driver_specific": { 00:21:08.997 "nvme": [ 00:21:08.997 { 00:21:08.997 "trid": { 00:21:08.997 "trtype": "RDMA", 00:21:08.997 "adrfam": "IPv4", 00:21:08.997 "traddr": "192.168.100.8", 00:21:08.998 "trsvcid": "4421", 00:21:08.998 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:08.998 }, 00:21:08.998 "ctrlr_data": { 00:21:08.998 "cntlid": 3, 00:21:08.998 "vendor_id": "0x8086", 00:21:08.998 "model_number": "SPDK bdev Controller", 00:21:08.998 "serial_number": "00000000000000000000", 00:21:08.998 "firmware_revision": "24.09", 00:21:08.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:08.998 "oacs": { 00:21:08.998 "security": 0, 00:21:08.998 "format": 0, 00:21:08.998 "firmware": 0, 00:21:08.998 "ns_manage": 0 00:21:08.998 }, 00:21:08.998 "multi_ctrlr": true, 00:21:08.998 "ana_reporting": false 00:21:08.998 }, 00:21:08.998 "vs": { 00:21:08.998 "nvme_version": "1.3" 00:21:08.998 }, 00:21:08.998 "ns_data": { 00:21:08.998 "id": 1, 00:21:08.998 "can_share": true 00:21:08.998 } 00:21:08.998 } 00:21:08.998 ], 00:21:08.998 "mp_policy": "active_passive" 00:21:08.998 } 00:21:08.998 } 00:21:08.998 ] 00:21:08.998 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.998 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.998 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.92HSI8v7ZI 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:09.257 rmmod nvme_rdma 00:21:09.257 rmmod nvme_fabrics 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.257 
13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2540694 ']' 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2540694 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2540694 ']' 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2540694 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2540694 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2540694' 00:21:09.257 killing process with pid 2540694 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2540694 00:21:09.257 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2540694 00:21:09.516 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:09.516 13:51:35 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:09.516 00:21:09.516 real 0m8.731s 00:21:09.516 user 0m3.743s 00:21:09.516 sys 0m5.741s 00:21:09.516 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:09.516 13:51:35 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:09.516 ************************************ 00:21:09.516 END TEST nvmf_async_init 00:21:09.516 ************************************ 00:21:09.516 13:51:35 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:09.516 13:51:35 nvmf_rdma -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:09.516 13:51:35 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:09.516 13:51:35 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:09.516 13:51:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:09.516 ************************************ 00:21:09.516 START TEST dma 00:21:09.516 ************************************ 00:21:09.516 13:51:35 nvmf_rdma.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:09.775 * Looking for test storage... 
00:21:09.775 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:09.775 13:51:36 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.775 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:21:09.775 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.775 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:09.776 13:51:36 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.776 13:51:36 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.776 13:51:36 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.776 13:51:36 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.776 13:51:36 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.776 13:51:36 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.776 13:51:36 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:21:09.776 13:51:36 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:09.776 13:51:36 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:21:09.776 13:51:36 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:21:09.776 13:51:36 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:21:09.776 13:51:36 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:21:09.776 13:51:36 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.776 13:51:36 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.776 13:51:36 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:09.776 13:51:36 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:21:09.776 13:51:36 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.374 13:51:42 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:16.374 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:16.374 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:16.374 Found net devices under 0000:18:00.0: mlx_0_0 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:16.374 Found net devices under 0000:18:00.1: mlx_0_1 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:16.374 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:16.375 13:51:42 nvmf_rdma.dma -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:16.375 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:16.375 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:21:16.375 altname enp24s0f0np0 00:21:16.375 altname ens785f0np0 00:21:16.375 inet 192.168.100.8/24 scope global mlx_0_0 00:21:16.375 valid_lft forever preferred_lft forever 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:16.375 13:51:42 
nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:16.375 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:16.375 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:21:16.375 altname enp24s0f1np1 00:21:16.375 altname ens785f1np1 00:21:16.375 inet 192.168.100.9/24 scope global mlx_0_1 00:21:16.375 valid_lft forever preferred_lft forever 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:16.375 192.168.100.9' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:16.375 192.168.100.9' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:16.375 192.168.100.9' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:16.375 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:16.671 13:51:42 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:21:16.671 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.671 13:51:42 nvmf_rdma.dma -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.671 13:51:42 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:16.671 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=2543796 00:21:16.671 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:16.671 13:51:42 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 2543796 00:21:16.672 13:51:42 nvmf_rdma.dma -- common/autotest_common.sh@829 -- # '[' -z 2543796 ']' 00:21:16.672 13:51:42 nvmf_rdma.dma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.672 13:51:42 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.672 13:51:42 nvmf_rdma.dma -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.672 13:51:42 nvmf_rdma.dma -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.672 13:51:42 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:16.672 [2024-07-15 13:51:42.963148] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:16.672 [2024-07-15 13:51:42.963210] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.672 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.672 [2024-07-15 13:51:43.051096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:16.672 [2024-07-15 13:51:43.142043] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
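The allocate_nic_ips/get_available_rdma_ips steps above reduce to reading the IPv4 address off each mlx interface and splitting the resulting list into a first and second target IP. A minimal sketch of that pipeline, assuming the mlx_0_0/mlx_0_1 names and 192.168.100.0/24 addressing seen in this run:

  rdma_ips=$(for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done)
  NVMF_FIRST_TARGET_IP=$(echo "$rdma_ips" | head -n 1)                # 192.168.100.8 in this log
  NVMF_SECOND_TARGET_IP=$(echo "$rdma_ips" | tail -n +2 | head -n 1)  # 192.168.100.9 in this log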
00:21:16.672 [2024-07-15 13:51:43.142084] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.672 [2024-07-15 13:51:43.142094] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.672 [2024-07-15 13:51:43.142103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.672 [2024-07-15 13:51:43.142111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.672 [2024-07-15 13:51:43.142176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.672 [2024-07-15 13:51:43.142175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.607 13:51:43 nvmf_rdma.dma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.607 13:51:43 nvmf_rdma.dma -- common/autotest_common.sh@862 -- # return 0 00:21:17.607 13:51:43 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.607 13:51:43 nvmf_rdma.dma -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:17.607 13:51:43 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:17.607 13:51:43 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.607 13:51:43 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:17.607 13:51:43 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.607 13:51:43 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:17.607 [2024-07-15 13:51:43.857934] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d6ea30/0x1d72f20) succeed. 00:21:17.607 [2024-07-15 13:51:43.867154] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d6ff30/0x1db45b0) succeed. 
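With both IB devices created, host/dma.sh configures the freshly started target over its RPC socket. The transport call logged above can be reproduced by hand with scripts/rpc.py against the default /var/tmp/spdk.sock shown in the waitforlisten message (a sketch for reference, not part of the test script):

  # Same transport and shared-buffer count as host/dma.sh@96
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024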
00:21:17.607 13:51:43 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.607 13:51:43 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:21:17.607 13:51:43 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.607 13:51:43 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:17.607 Malloc0 00:21:17.607 13:51:44 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.607 13:51:44 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:17.607 13:51:44 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.607 13:51:44 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:17.607 13:51:44 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.607 13:51:44 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:21:17.607 13:51:44 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.607 13:51:44 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:17.607 13:51:44 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.607 13:51:44 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:17.607 13:51:44 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.607 13:51:44 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:17.607 [2024-07-15 13:51:44.051866] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:17.608 13:51:44 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.608 13:51:44 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:21:17.608 13:51:44 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:21:17.608 13:51:44 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:21:17.608 13:51:44 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:21:17.608 13:51:44 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:17.608 13:51:44 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:17.608 { 00:21:17.608 "params": { 00:21:17.608 "name": "Nvme$subsystem", 00:21:17.608 "trtype": "$TEST_TRANSPORT", 00:21:17.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.608 "adrfam": "ipv4", 00:21:17.608 "trsvcid": "$NVMF_PORT", 00:21:17.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.608 "hdgst": ${hdgst:-false}, 00:21:17.608 "ddgst": ${ddgst:-false} 00:21:17.608 }, 00:21:17.608 "method": "bdev_nvme_attach_controller" 00:21:17.608 } 00:21:17.608 EOF 00:21:17.608 )") 00:21:17.608 13:51:44 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:21:17.608 13:51:44 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
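host/dma.sh@97-100 above build out the cnode0 subsystem one RPC at a time. The equivalent manual sequence, with every flag taken verbatim from this log (a sketch, not the script itself):

  ./scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420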
00:21:17.608 13:51:44 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:21:17.608 13:51:44 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:17.608 "params": { 00:21:17.608 "name": "Nvme0", 00:21:17.608 "trtype": "rdma", 00:21:17.608 "traddr": "192.168.100.8", 00:21:17.608 "adrfam": "ipv4", 00:21:17.608 "trsvcid": "4420", 00:21:17.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.608 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:17.608 "hdgst": false, 00:21:17.608 "ddgst": false 00:21:17.608 }, 00:21:17.608 "method": "bdev_nvme_attach_controller" 00:21:17.608 }' 00:21:17.608 [2024-07-15 13:51:44.101795] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:17.608 [2024-07-15 13:51:44.101855] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544000 ] 00:21:17.608 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.866 [2024-07-15 13:51:44.184301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:17.866 [2024-07-15 13:51:44.267494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.866 [2024-07-15 13:51:44.267495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.423 bdev Nvme0n1 reports 1 memory domains 00:21:24.423 bdev Nvme0n1 supports RDMA memory domain 00:21:24.423 Initialization complete, running randrw IO for 5 sec on 2 cores 00:21:24.423 ========================================================================== 00:21:24.423 Latency [us] 00:21:24.423 IOPS MiB/s Average min max 00:21:24.423 Core 2: 21495.61 83.97 743.66 255.68 8083.41 00:21:24.423 Core 3: 21683.78 84.70 737.15 258.26 8166.13 00:21:24.423 ========================================================================== 00:21:24.423 Total : 43179.39 168.67 740.39 255.68 8166.13 00:21:24.423 00:21:24.423 Total operations: 215931, translate 215931 pull_push 0 memzero 0 00:21:24.423 13:51:49 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:21:24.423 13:51:49 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:21:24.423 13:51:49 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:21:24.423 [2024-07-15 13:51:49.715260] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
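The translate pass above feeds test_dma the bdev_nvme_attach_controller fragment printed by gen_nvmf_target_json through /dev/fd/62. A sketch of an equivalent standalone rerun using an ordinary file; the params block is verbatim from this log, while the surrounding subsystems/bdev wrapper is assumed to follow the usual SPDK JSON config layout:

  nvme0.json:
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "rdma",
              "traddr": "192.168.100.8",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

  # Same workload as host/dma.sh@104: QD 16, 4 KiB, 70/30 randrw, 5 s, cores 2-3
  ./test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
      --json nvme0.json -b Nvme0n1 -f -x translate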
00:21:24.423 [2024-07-15 13:51:49.715320] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544729 ] 00:21:24.423 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.423 [2024-07-15 13:51:49.800234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:24.423 [2024-07-15 13:51:49.882526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.423 [2024-07-15 13:51:49.882527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.690 bdev Malloc0 reports 2 memory domains 00:21:29.690 bdev Malloc0 doesn't support RDMA memory domain 00:21:29.690 Initialization complete, running randrw IO for 5 sec on 2 cores 00:21:29.690 ========================================================================== 00:21:29.690 Latency [us] 00:21:29.690 IOPS MiB/s Average min max 00:21:29.690 Core 2: 14419.66 56.33 1108.84 417.67 1807.09 00:21:29.690 Core 3: 14491.64 56.61 1103.32 413.39 2145.82 00:21:29.690 ========================================================================== 00:21:29.690 Total : 28911.29 112.93 1106.07 413.39 2145.82 00:21:29.690 00:21:29.690 Total operations: 144604, translate 0 pull_push 578416 memzero 0 00:21:29.690 13:51:55 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:21:29.690 13:51:55 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:21:29.690 13:51:55 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:21:29.690 13:51:55 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:21:29.690 Ignoring -M option 00:21:29.690 [2024-07-15 13:51:55.254989] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:21:29.690 [2024-07-15 13:51:55.255051] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545450 ] 00:21:29.690 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.690 [2024-07-15 13:51:55.337909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:29.690 [2024-07-15 13:51:55.420426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.690 [2024-07-15 13:51:55.420427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.950 bdev f0ff849c-1a4e-4e5c-b1db-43e17efeafcf reports 1 memory domains 00:21:34.950 bdev f0ff849c-1a4e-4e5c-b1db-43e17efeafcf supports RDMA memory domain 00:21:34.950 Initialization complete, running randread IO for 5 sec on 2 cores 00:21:34.950 ========================================================================== 00:21:34.950 Latency [us] 00:21:34.950 IOPS MiB/s Average min max 00:21:34.950 Core 2: 80291.18 313.64 198.60 80.86 3681.01 00:21:34.950 Core 3: 81006.70 316.43 196.83 76.74 3448.10 00:21:34.950 ========================================================================== 00:21:34.950 Total : 161297.88 630.07 197.71 76.74 3681.01 00:21:34.950 00:21:34.950 Total operations: 806586, translate 0 pull_push 0 memzero 806586 00:21:34.950 13:52:00 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:21:34.950 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.950 [2024-07-15 13:52:01.029784] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:36.856 Initializing NVMe Controllers 00:21:36.856 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:21:36.856 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:21:36.856 Initialization complete. Launching workers. 00:21:36.856 ======================================================== 00:21:36.856 Latency(us) 00:21:36.856 Device Information : IOPS MiB/s Average min max 00:21:36.856 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.21 5980.47 9977.66 00:21:36.856 ======================================================== 00:21:36.856 Total : 2016.00 7.88 7972.21 5980.47 9977.66 00:21:36.856 00:21:36.856 13:52:03 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:21:36.856 13:52:03 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:21:36.856 13:52:03 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:21:36.856 13:52:03 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:21:36.856 [2024-07-15 13:52:03.373620] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
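The spdk_nvme_perf run above (host/dma.sh@113) gives only a transport address in -r, so the tool goes through the discovery subsystem first, which is what triggers the deprecation warning logged by the target. A sketch of the same one-second write test aimed directly at the data subsystem; the explicit subnqn key is the only change from the command in this log:

  ./build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
      -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0'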
00:21:36.856 [2024-07-15 13:52:03.373679] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546535 ] 00:21:37.115 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.115 [2024-07-15 13:52:03.459460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:37.115 [2024-07-15 13:52:03.542995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.115 [2024-07-15 13:52:03.542995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.700 bdev 37cd90fb-3749-4c50-825e-44ce49ee85ed reports 1 memory domains 00:21:43.700 bdev 37cd90fb-3749-4c50-825e-44ce49ee85ed supports RDMA memory domain 00:21:43.700 Initialization complete, running randrw IO for 5 sec on 2 cores 00:21:43.700 ========================================================================== 00:21:43.700 Latency [us] 00:21:43.700 IOPS MiB/s Average min max 00:21:43.700 Core 2: 18961.20 74.07 842.87 52.01 10831.52 00:21:43.700 Core 3: 19261.73 75.24 829.72 11.61 10971.39 00:21:43.700 ========================================================================== 00:21:43.700 Total : 38222.93 149.31 836.24 11.61 10971.39 00:21:43.700 00:21:43.700 Total operations: 191157, translate 191053 pull_push 0 memzero 104 00:21:43.700 13:52:09 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:21:43.700 13:52:09 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:43.700 rmmod nvme_rdma 00:21:43.700 rmmod nvme_fabrics 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 2543796 ']' 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 2543796 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@948 -- # '[' -z 2543796 ']' 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # kill -0 2543796 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # uname 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2543796 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2543796' 00:21:43.700 killing process with pid 2543796 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@967 -- # kill 2543796 00:21:43.700 13:52:09 nvmf_rdma.dma -- 
common/autotest_common.sh@972 -- # wait 2543796 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.700 13:52:09 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:43.700 00:21:43.700 real 0m33.485s 00:21:43.700 user 1m37.395s 00:21:43.700 sys 0m6.566s 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:43.700 13:52:09 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:21:43.700 ************************************ 00:21:43.700 END TEST dma 00:21:43.700 ************************************ 00:21:43.700 13:52:09 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:43.700 13:52:09 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:21:43.700 13:52:09 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:43.700 13:52:09 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:43.700 13:52:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:43.700 ************************************ 00:21:43.700 START TEST nvmf_identify 00:21:43.700 ************************************ 00:21:43.700 13:52:09 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:21:43.700 * Looking for test storage... 00:21:43.700 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:43.700 13:52:09 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.700 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:43.700 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.700 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.700 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.700 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.700 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.700 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 
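nvmf/common.sh@17-19 above derive the host identity (NVME_HOSTNQN via nvme gen-hostnqn, NVME_HOSTID as its uuid suffix) that later nvme-cli calls pass along. A minimal sketch of the same idea against the nqn.2016-06.io.spdk:cnode1 subsystem configured later in this test; the connect itself is illustrative, since identify.sh drives the target through spdk_nvme_identify rather than the kernel initiator:

  HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:809f3706-...
  HOSTID=${HOSTNQN##*uuid:}          # bare UUID, matching NVME_HOSTID above
  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN" --hostid="$HOSTID"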
00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.701 13:52:09 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.272 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:50.273 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:50.273 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:50.273 Found net devices under 0000:18:00.0: mlx_0_0 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:50.273 Found net devices under 0000:18:00.1: mlx_0_1 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:50.273 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:50.273 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:21:50.273 altname enp24s0f0np0 00:21:50.273 altname ens785f0np0 00:21:50.273 inet 192.168.100.8/24 scope global mlx_0_0 00:21:50.273 valid_lft forever preferred_lft forever 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:50.273 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:50.273 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:21:50.273 altname enp24s0f1np1 00:21:50.273 altname ens785f1np1 00:21:50.273 inet 192.168.100.9/24 scope global mlx_0_1 00:21:50.273 valid_lft forever 
preferred_lft forever 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:50.273 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 
00:21:50.274 192.168.100.9' 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:50.274 192.168.100.9' 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:50.274 192.168.100.9' 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2550161 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2550161 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2550161 ']' 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.274 13:52:16 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.274 [2024-07-15 13:52:16.555294] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:50.274 [2024-07-15 13:52:16.555355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.274 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.274 [2024-07-15 13:52:16.644838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:50.274 [2024-07-15 13:52:16.735617] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
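host/identify.sh@18-23 above launch a second nvmf_tgt with the wider 0xF core mask and then block in waitforlisten until the RPC socket answers. A rough standalone equivalent of that startup-and-wait handshake, assuming the default /var/tmp/spdk.sock socket; the polling loop stands in for waitforlisten and is an illustration, not the script's own helper:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  NVMF_PID=$!
  # Poll the RPC socket until the target is ready to accept commands
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # ... run the test, then: kill $NVMF_PID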
00:21:50.274 [2024-07-15 13:52:16.735662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.274 [2024-07-15 13:52:16.735672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.274 [2024-07-15 13:52:16.735681] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.274 [2024-07-15 13:52:16.735689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.274 [2024-07-15 13:52:16.735738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.274 [2024-07-15 13:52:16.735771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.274 [2024-07-15 13:52:16.735875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.274 [2024-07-15 13:52:16.735876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 [2024-07-15 13:52:17.407270] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc1b180/0xc1f670) succeed. 00:21:51.208 [2024-07-15 13:52:17.416783] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc1c7c0/0xc60d00) succeed. 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 Malloc0 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.208 13:52:17 
nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 [2024-07-15 13:52:17.639141] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.208 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.208 [ 00:21:51.208 { 00:21:51.208 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:51.208 "subtype": "Discovery", 00:21:51.208 "listen_addresses": [ 00:21:51.208 { 00:21:51.208 "trtype": "RDMA", 00:21:51.208 "adrfam": "IPv4", 00:21:51.208 "traddr": "192.168.100.8", 00:21:51.208 "trsvcid": "4420" 00:21:51.208 } 00:21:51.208 ], 00:21:51.208 "allow_any_host": true, 00:21:51.208 "hosts": [] 00:21:51.208 }, 00:21:51.208 { 00:21:51.208 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.209 "subtype": "NVMe", 00:21:51.209 "listen_addresses": [ 00:21:51.209 { 00:21:51.209 "trtype": "RDMA", 00:21:51.209 "adrfam": "IPv4", 00:21:51.209 "traddr": "192.168.100.8", 00:21:51.209 "trsvcid": "4420" 00:21:51.209 } 00:21:51.209 ], 00:21:51.209 "allow_any_host": true, 00:21:51.209 "hosts": [], 00:21:51.209 "serial_number": "SPDK00000000000001", 00:21:51.209 "model_number": "SPDK bdev Controller", 00:21:51.209 "max_namespaces": 32, 00:21:51.209 "min_cntlid": 1, 00:21:51.209 "max_cntlid": 65519, 00:21:51.209 "namespaces": [ 00:21:51.209 { 00:21:51.209 "nsid": 1, 00:21:51.209 "bdev_name": "Malloc0", 00:21:51.209 "name": "Malloc0", 00:21:51.209 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:51.209 "eui64": "ABCDEF0123456789", 00:21:51.209 "uuid": "5c826e9a-82f5-4170-b8c6-ece20975a55e" 00:21:51.209 } 00:21:51.209 ] 00:21:51.209 } 00:21:51.209 ] 00:21:51.209 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.209 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:51.209 [2024-07-15 13:52:17.698335] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
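The identify pass above walks the discovery subsystem with spdk_nvme_identify -L all. The same discovery log can be sanity-checked from the initiator side with nvme-cli (a sketch, relying on the nvme-rdma module loaded earlier in this run):

  # List the discovery log entries exposed at 192.168.100.8:4420
  nvme discover -t rdma -a 192.168.100.8 -s 4420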
00:21:51.209 [2024-07-15 13:52:17.698379] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550360 ] 00:21:51.209 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.474 [2024-07-15 13:52:17.747888] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:51.474 [2024-07-15 13:52:17.747977] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:21:51.474 [2024-07-15 13:52:17.747996] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:21:51.474 [2024-07-15 13:52:17.748001] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:21:51.474 [2024-07-15 13:52:17.748037] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:51.474 [2024-07-15 13:52:17.759113] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:21:51.474 [2024-07-15 13:52:17.769398] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:51.474 [2024-07-15 13:52:17.769410] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:21:51.474 [2024-07-15 13:52:17.769418] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769426] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769433] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769440] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769446] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769453] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769459] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769466] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769472] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769479] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769486] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769492] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769499] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769505] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769512] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769518] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769525] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769531] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769538] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769544] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769551] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769557] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769568] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769575] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769581] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769588] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769594] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769601] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769607] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769617] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769623] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769629] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:21:51.474 [2024-07-15 13:52:17.769635] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:51.474 [2024-07-15 13:52:17.769640] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:21:51.474 [2024-07-15 13:52:17.769665] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.769681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180100 00:21:51.474 [2024-07-15 13:52:17.774570] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.474 [2024-07-15 13:52:17.774582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:51.474 [2024-07-15 13:52:17.774591] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.774600] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:51.474 [2024-07-15 13:52:17.774608] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:51.474 [2024-07-15 13:52:17.774615] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:51.474 [2024-07-15 13:52:17.774631] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.774640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.474 [2024-07-15 13:52:17.774664] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.474 [2024-07-15 13:52:17.774670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:21:51.474 [2024-07-15 13:52:17.774678] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:51.474 [2024-07-15 13:52:17.774685] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.774691] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:51.474 [2024-07-15 13:52:17.774699] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.774707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.474 [2024-07-15 13:52:17.774729] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.474 [2024-07-15 13:52:17.774735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:21:51.474 [2024-07-15 13:52:17.774742] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:51.474 [2024-07-15 13:52:17.774748] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.774756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:51.474 [2024-07-15 13:52:17.774763] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.774771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.474 [2024-07-15 13:52:17.774789] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.474 [2024-07-15 13:52:17.774795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:51.474 [2024-07-15 13:52:17.774803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:51.474 [2024-07-15 13:52:17.774809] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.774818] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.774826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.474 [2024-07-15 13:52:17.774851] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.474 [2024-07-15 13:52:17.774857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:51.474 [2024-07-15 13:52:17.774864] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:51.474 [2024-07-15 13:52:17.774871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:51.474 [2024-07-15 13:52:17.774877] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.774884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:51.474 [2024-07-15 13:52:17.774991] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:51.474 [2024-07-15 13:52:17.774997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:51.474 [2024-07-15 13:52:17.775008] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.474 [2024-07-15 13:52:17.775016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.474 [2024-07-15 13:52:17.775038] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.475 [2024-07-15 13:52:17.775044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:51.475 [2024-07-15 13:52:17.775051] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:51.475 [2024-07-15 13:52:17.775057] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775065] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.475 [2024-07-15 13:52:17.775089] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.475 [2024-07-15 13:52:17.775095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:51.475 [2024-07-15 13:52:17.775102] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:21:51.475 [2024-07-15 13:52:17.775108] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:51.475 [2024-07-15 13:52:17.775114] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775121] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:51.475 [2024-07-15 13:52:17.775133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:51.475 [2024-07-15 13:52:17.775143] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:21:51.475 [2024-07-15 13:52:17.775194] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.475 [2024-07-15 13:52:17.775200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:51.475 [2024-07-15 13:52:17.775209] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:51.475 [2024-07-15 13:52:17.775216] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:51.475 [2024-07-15 13:52:17.775222] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:51.475 [2024-07-15 13:52:17.775229] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:51.475 [2024-07-15 13:52:17.775236] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:51.475 [2024-07-15 13:52:17.775242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:51.475 [2024-07-15 13:52:17.775248] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:51.475 [2024-07-15 13:52:17.775264] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.475 [2024-07-15 13:52:17.775292] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.475 [2024-07-15 13:52:17.775298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:51.475 [2024-07-15 13:52:17.775309] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775316] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.475 [2024-07-15 13:52:17.775323] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.475 [2024-07-15 13:52:17.775338] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.475 [2024-07-15 13:52:17.775352] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.475 [2024-07-15 13:52:17.775366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:51.475 [2024-07-15 13:52:17.775373] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:51.475 [2024-07-15 13:52:17.775392] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.475 [2024-07-15 13:52:17.775424] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.475 [2024-07-15 13:52:17.775430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:21:51.475 [2024-07-15 13:52:17.775437] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:51.475 [2024-07-15 13:52:17.775446] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:51.475 [2024-07-15 13:52:17.775452] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775462] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:21:51.475 [2024-07-15 13:52:17.775494] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.475 [2024-07-15 13:52:17.775500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:51.475 [2024-07-15 13:52:17.775507] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775519] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:51.475 [2024-07-15 13:52:17.775547] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x180100 00:21:51.475 [2024-07-15 13:52:17.775568] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.475 [2024-07-15 13:52:17.775591] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.475 [2024-07-15 13:52:17.775597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:51.475 [2024-07-15 13:52:17.775610] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180100 00:21:51.475 [2024-07-15 13:52:17.775624] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775631] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.475 [2024-07-15 13:52:17.775636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:51.475 [2024-07-15 13:52:17.775643] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775649] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.475 [2024-07-15 13:52:17.775657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:51.475 [2024-07-15 13:52:17.775668] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180100 00:21:51.475 [2024-07-15 13:52:17.775682] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:21:51.475 [2024-07-15 13:52:17.775704] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.475 [2024-07-15 13:52:17.775710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:51.475 [2024-07-15 13:52:17.775721] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:21:51.475 ===================================================== 00:21:51.475 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:51.475 
===================================================== 00:21:51.475 Controller Capabilities/Features 00:21:51.475 ================================ 00:21:51.475 Vendor ID: 0000 00:21:51.475 Subsystem Vendor ID: 0000 00:21:51.475 Serial Number: .................... 00:21:51.476 Model Number: ........................................ 00:21:51.476 Firmware Version: 24.09 00:21:51.476 Recommended Arb Burst: 0 00:21:51.476 IEEE OUI Identifier: 00 00 00 00:21:51.476 Multi-path I/O 00:21:51.476 May have multiple subsystem ports: No 00:21:51.476 May have multiple controllers: No 00:21:51.476 Associated with SR-IOV VF: No 00:21:51.476 Max Data Transfer Size: 131072 00:21:51.476 Max Number of Namespaces: 0 00:21:51.476 Max Number of I/O Queues: 1024 00:21:51.476 NVMe Specification Version (VS): 1.3 00:21:51.476 NVMe Specification Version (Identify): 1.3 00:21:51.476 Maximum Queue Entries: 128 00:21:51.476 Contiguous Queues Required: Yes 00:21:51.476 Arbitration Mechanisms Supported 00:21:51.476 Weighted Round Robin: Not Supported 00:21:51.476 Vendor Specific: Not Supported 00:21:51.476 Reset Timeout: 15000 ms 00:21:51.476 Doorbell Stride: 4 bytes 00:21:51.476 NVM Subsystem Reset: Not Supported 00:21:51.476 Command Sets Supported 00:21:51.476 NVM Command Set: Supported 00:21:51.476 Boot Partition: Not Supported 00:21:51.476 Memory Page Size Minimum: 4096 bytes 00:21:51.476 Memory Page Size Maximum: 4096 bytes 00:21:51.476 Persistent Memory Region: Not Supported 00:21:51.476 Optional Asynchronous Events Supported 00:21:51.476 Namespace Attribute Notices: Not Supported 00:21:51.476 Firmware Activation Notices: Not Supported 00:21:51.476 ANA Change Notices: Not Supported 00:21:51.476 PLE Aggregate Log Change Notices: Not Supported 00:21:51.476 LBA Status Info Alert Notices: Not Supported 00:21:51.476 EGE Aggregate Log Change Notices: Not Supported 00:21:51.476 Normal NVM Subsystem Shutdown event: Not Supported 00:21:51.476 Zone Descriptor Change Notices: Not Supported 00:21:51.476 Discovery Log Change Notices: Supported 00:21:51.476 Controller Attributes 00:21:51.476 128-bit Host Identifier: Not Supported 00:21:51.476 Non-Operational Permissive Mode: Not Supported 00:21:51.476 NVM Sets: Not Supported 00:21:51.476 Read Recovery Levels: Not Supported 00:21:51.476 Endurance Groups: Not Supported 00:21:51.476 Predictable Latency Mode: Not Supported 00:21:51.476 Traffic Based Keep ALive: Not Supported 00:21:51.476 Namespace Granularity: Not Supported 00:21:51.476 SQ Associations: Not Supported 00:21:51.476 UUID List: Not Supported 00:21:51.476 Multi-Domain Subsystem: Not Supported 00:21:51.476 Fixed Capacity Management: Not Supported 00:21:51.476 Variable Capacity Management: Not Supported 00:21:51.476 Delete Endurance Group: Not Supported 00:21:51.476 Delete NVM Set: Not Supported 00:21:51.476 Extended LBA Formats Supported: Not Supported 00:21:51.476 Flexible Data Placement Supported: Not Supported 00:21:51.476 00:21:51.476 Controller Memory Buffer Support 00:21:51.476 ================================ 00:21:51.476 Supported: No 00:21:51.476 00:21:51.476 Persistent Memory Region Support 00:21:51.476 ================================ 00:21:51.476 Supported: No 00:21:51.476 00:21:51.476 Admin Command Set Attributes 00:21:51.476 ============================ 00:21:51.476 Security Send/Receive: Not Supported 00:21:51.476 Format NVM: Not Supported 00:21:51.476 Firmware Activate/Download: Not Supported 00:21:51.476 Namespace Management: Not Supported 00:21:51.476 Device Self-Test: Not Supported 00:21:51.476 
Directives: Not Supported 00:21:51.476 NVMe-MI: Not Supported 00:21:51.476 Virtualization Management: Not Supported 00:21:51.476 Doorbell Buffer Config: Not Supported 00:21:51.476 Get LBA Status Capability: Not Supported 00:21:51.476 Command & Feature Lockdown Capability: Not Supported 00:21:51.476 Abort Command Limit: 1 00:21:51.476 Async Event Request Limit: 4 00:21:51.476 Number of Firmware Slots: N/A 00:21:51.476 Firmware Slot 1 Read-Only: N/A 00:21:51.476 Firmware Activation Without Reset: N/A 00:21:51.476 Multiple Update Detection Support: N/A 00:21:51.476 Firmware Update Granularity: No Information Provided 00:21:51.476 Per-Namespace SMART Log: No 00:21:51.476 Asymmetric Namespace Access Log Page: Not Supported 00:21:51.476 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:51.476 Command Effects Log Page: Not Supported 00:21:51.476 Get Log Page Extended Data: Supported 00:21:51.476 Telemetry Log Pages: Not Supported 00:21:51.476 Persistent Event Log Pages: Not Supported 00:21:51.476 Supported Log Pages Log Page: May Support 00:21:51.476 Commands Supported & Effects Log Page: Not Supported 00:21:51.476 Feature Identifiers & Effects Log Page:May Support 00:21:51.476 NVMe-MI Commands & Effects Log Page: May Support 00:21:51.476 Data Area 4 for Telemetry Log: Not Supported 00:21:51.476 Error Log Page Entries Supported: 128 00:21:51.476 Keep Alive: Not Supported 00:21:51.476 00:21:51.476 NVM Command Set Attributes 00:21:51.476 ========================== 00:21:51.476 Submission Queue Entry Size 00:21:51.476 Max: 1 00:21:51.476 Min: 1 00:21:51.476 Completion Queue Entry Size 00:21:51.476 Max: 1 00:21:51.476 Min: 1 00:21:51.476 Number of Namespaces: 0 00:21:51.476 Compare Command: Not Supported 00:21:51.476 Write Uncorrectable Command: Not Supported 00:21:51.476 Dataset Management Command: Not Supported 00:21:51.476 Write Zeroes Command: Not Supported 00:21:51.476 Set Features Save Field: Not Supported 00:21:51.476 Reservations: Not Supported 00:21:51.476 Timestamp: Not Supported 00:21:51.476 Copy: Not Supported 00:21:51.476 Volatile Write Cache: Not Present 00:21:51.476 Atomic Write Unit (Normal): 1 00:21:51.476 Atomic Write Unit (PFail): 1 00:21:51.476 Atomic Compare & Write Unit: 1 00:21:51.476 Fused Compare & Write: Supported 00:21:51.476 Scatter-Gather List 00:21:51.476 SGL Command Set: Supported 00:21:51.476 SGL Keyed: Supported 00:21:51.476 SGL Bit Bucket Descriptor: Not Supported 00:21:51.476 SGL Metadata Pointer: Not Supported 00:21:51.476 Oversized SGL: Not Supported 00:21:51.476 SGL Metadata Address: Not Supported 00:21:51.476 SGL Offset: Supported 00:21:51.476 Transport SGL Data Block: Not Supported 00:21:51.476 Replay Protected Memory Block: Not Supported 00:21:51.476 00:21:51.476 Firmware Slot Information 00:21:51.476 ========================= 00:21:51.476 Active slot: 0 00:21:51.476 00:21:51.476 00:21:51.476 Error Log 00:21:51.476 ========= 00:21:51.476 00:21:51.476 Active Namespaces 00:21:51.476 ================= 00:21:51.476 Discovery Log Page 00:21:51.476 ================== 00:21:51.476 Generation Counter: 2 00:21:51.476 Number of Records: 2 00:21:51.476 Record Format: 0 00:21:51.476 00:21:51.476 Discovery Log Entry 0 00:21:51.476 ---------------------- 00:21:51.476 Transport Type: 1 (RDMA) 00:21:51.476 Address Family: 1 (IPv4) 00:21:51.476 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:51.476 Entry Flags: 00:21:51.476 Duplicate Returned Information: 1 00:21:51.476 Explicit Persistent Connection Support for Discovery: 1 00:21:51.476 Transport Requirements: 
00:21:51.476 Secure Channel: Not Required 00:21:51.476 Port ID: 0 (0x0000) 00:21:51.476 Controller ID: 65535 (0xffff) 00:21:51.476 Admin Max SQ Size: 128 00:21:51.476 Transport Service Identifier: 4420 00:21:51.476 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:51.476 Transport Address: 192.168.100.8 00:21:51.476 Transport Specific Address Subtype - RDMA 00:21:51.476 RDMA QP Service Type: 1 (Reliable Connected) 00:21:51.476 RDMA Provider Type: 1 (No provider specified) 00:21:51.476 RDMA CM Service: 1 (RDMA_CM) 00:21:51.476 Discovery Log Entry 1 00:21:51.476 ---------------------- 00:21:51.476 Transport Type: 1 (RDMA) 00:21:51.476 Address Family: 1 (IPv4) 00:21:51.476 Subsystem Type: 2 (NVM Subsystem) 00:21:51.476 Entry Flags: 00:21:51.476 Duplicate Returned Information: 0 00:21:51.476 Explicit Persistent Connection Support for Discovery: 0 00:21:51.476 Transport Requirements: 00:21:51.476 Secure Channel: Not Required 00:21:51.476 Port ID: 0 (0x0000) 00:21:51.476 Controller ID: 65535 (0xffff) 00:21:51.476 Admin Max SQ Size: [2024-07-15 13:52:17.775804] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:51.476 [2024-07-15 13:52:17.775814] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 33939 doesn't match qid 00:21:51.476 [2024-07-15 13:52:17.775830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32538 cdw0:5 sqhd:aad0 p:0 m:0 dnr:0 00:21:51.476 [2024-07-15 13:52:17.775837] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 33939 doesn't match qid 00:21:51.476 [2024-07-15 13:52:17.775845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32538 cdw0:5 sqhd:aad0 p:0 m:0 dnr:0 00:21:51.476 [2024-07-15 13:52:17.775852] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 33939 doesn't match qid 00:21:51.476 [2024-07-15 13:52:17.775860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32538 cdw0:5 sqhd:aad0 p:0 m:0 dnr:0 00:21:51.476 [2024-07-15 13:52:17.775867] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 33939 doesn't match qid 00:21:51.476 [2024-07-15 13:52:17.775875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32538 cdw0:5 sqhd:aad0 p:0 m:0 dnr:0 00:21:51.476 [2024-07-15 13:52:17.775884] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:21:51.476 [2024-07-15 13:52:17.775893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.476 [2024-07-15 13:52:17.775915] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.775921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.775930] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.775938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.775945] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 
13:52:17.775970] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.775976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.775983] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:51.477 [2024-07-15 13:52:17.775990] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:51.477 [2024-07-15 13:52:17.775996] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776007] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776043] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776056] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776066] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776096] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776109] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776118] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776146] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776158] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776167] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776194] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776206] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776215] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776243] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776256] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776265] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776291] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776303] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776314] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776340] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776353] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776362] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776392] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776404] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776413] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776437] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776450] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776459] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776483] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776495] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776505] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776533] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776545] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776554] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776590] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776603] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776613] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776644] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776656] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776665] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776690] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776702] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776711] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776737] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776750] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776759] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776789] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776801] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776811] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776835] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.477 [2024-07-15 13:52:17.776840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:21:51.477 [2024-07-15 13:52:17.776847] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776856] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.477 [2024-07-15 13:52:17.776864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.477 [2024-07-15 13:52:17.776888] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.776894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.776902] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.776911] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.776919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.776938] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.776943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.776950] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.776959] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.776967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.776983] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.776989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.776996] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777005] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777031] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.777043] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777052] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777080] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.777093] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777102] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777128] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.777140] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777149] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777176] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.777189] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777199] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777221] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.777233] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777242] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777272] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.777285] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777294] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777320] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.777332] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777341] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777371] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.777384] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777393] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777421] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.777433] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777443] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777471] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:21:51.478 [2024-07-15 13:52:17.777484] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777493] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.478 [2024-07-15 13:52:17.777501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.478 [2024-07-15 13:52:17.777524] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.478 [2024-07-15 13:52:17.777530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.777536] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777545] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.777576] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.777582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.777588] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777597] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.777622] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.777627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.777634] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777643] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.777671] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.777677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.777684] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777693] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.777718] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.777724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.777731] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777740] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.777762] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.777770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.777776] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777785] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.777814] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.777819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.777826] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777835] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.777859] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.777865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.777872] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777881] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.777911] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.777919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.777927] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777938] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.777964] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.777971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.777979] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.777990] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.778015] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.778022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.778031] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778042] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.778074] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.778080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.778088] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778098] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.778127] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.778133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.778140] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778149] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.778173] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.778179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.778185] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778194] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.778219] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.778224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.778231] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778240] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.778264] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.778270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.778276] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778285] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.778312] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.778317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.778324] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778333] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.778357] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.778363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.778369] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778378] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.778401] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.778406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.778413] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778422] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.479 [2024-07-15 13:52:17.778430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.479 [2024-07-15 13:52:17.778452] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.479 [2024-07-15 13:52:17.778458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:21:51.479 [2024-07-15 13:52:17.778465] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.778474] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.778482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:21:51.480 [2024-07-15 13:52:17.778500] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.480 [2024-07-15 13:52:17.778507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:21:51.480 [2024-07-15 13:52:17.778514] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.778523] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.778532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.480 [2024-07-15 13:52:17.778555] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.480 [2024-07-15 13:52:17.778563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:21:51.480 [2024-07-15 13:52:17.782578] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.782587] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.782595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.480 [2024-07-15 13:52:17.782618] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.480 [2024-07-15 13:52:17.782624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0 00:21:51.480 [2024-07-15 13:52:17.782631] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.782638] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:51.480 128 00:21:51.480 Transport Service Identifier: 4420 00:21:51.480 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:51.480 Transport Address: 192.168.100.8 00:21:51.480 Transport Specific Address Subtype - RDMA 00:21:51.480 RDMA QP Service Type: 1 (Reliable Connected) 00:21:51.480 RDMA Provider Type: 1 (No provider specified) 00:21:51.480 RDMA CM Service: 1 (RDMA_CM) 00:21:51.480 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:51.480 [2024-07-15 13:52:17.860447] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
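For reference, the -r transport string passed to spdk_nvme_identify above can be consumed directly by SPDK's public host API. The following is only a minimal sketch under that assumption, not the identify tool's actual source; the program name "identify_sketch" and the reduced error handling are placeholders.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Minimal sketch: connect to the same RDMA subsystem targeted above and
 * print a few identify fields. Not the spdk_nvme_identify implementation. */
int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* placeholder app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string as the -r option in the command above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* The synchronous connect drives the admin-queue state machine that the
	 * DEBUG lines in this log trace (connect adminq, read vs/cap, enable,
	 * identify, configure AER, ...). */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s Serial: %.20s CNTLID: 0x%04x\n",
	       (const char *)cdata->mn, (const char *)cdata->sn, cdata->cntlid);

	spdk_nvme_detach(ctrlr);
	return 0;
}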
00:21:51.480 [2024-07-15 13:52:17.860489] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550366 ] 00:21:51.480 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.480 [2024-07-15 13:52:17.906769] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:51.480 [2024-07-15 13:52:17.906842] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:21:51.480 [2024-07-15 13:52:17.906863] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:21:51.480 [2024-07-15 13:52:17.906868] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:21:51.480 [2024-07-15 13:52:17.906893] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:51.480 [2024-07-15 13:52:17.918098] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:21:51.480 [2024-07-15 13:52:17.932368] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:51.480 [2024-07-15 13:52:17.932379] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:21:51.480 [2024-07-15 13:52:17.932387] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932394] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932401] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932408] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932414] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932421] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932427] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932433] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932440] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932446] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932453] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932459] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932466] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932475] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932481] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932488] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932494] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932501] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932507] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932514] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932520] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932526] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932533] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932539] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932546] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932552] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932558] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932568] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932574] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932581] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932587] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932593] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:21:51.480 [2024-07-15 13:52:17.932599] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:21:51.480 [2024-07-15 13:52:17.932603] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:21:51.480 [2024-07-15 13:52:17.932620] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.932632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180100 00:21:51.480 [2024-07-15 13:52:17.937569] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.480 [2024-07-15 13:52:17.937579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:51.480 [2024-07-15 13:52:17.937587] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.937595] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:51.480 [2024-07-15 13:52:17.937602] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:51.480 [2024-07-15 13:52:17.937609] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:51.480 [2024-07-15 13:52:17.937623] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.937632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.480 [2024-07-15 13:52:17.937655] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.480 [2024-07-15 13:52:17.937663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:21:51.480 [2024-07-15 13:52:17.937670] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:51.480 [2024-07-15 13:52:17.937676] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.937683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:51.480 [2024-07-15 13:52:17.937691] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.937699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.480 [2024-07-15 13:52:17.937719] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.480 [2024-07-15 13:52:17.937725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:21:51.480 [2024-07-15 13:52:17.937732] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:51.480 [2024-07-15 13:52:17.937738] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.937746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:51.480 [2024-07-15 13:52:17.937754] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.937762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.480 [2024-07-15 13:52:17.937784] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.480 [2024-07-15 13:52:17.937789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:51.480 [2024-07-15 13:52:17.937796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:51.480 [2024-07-15 13:52:17.937803] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 
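As a hedged illustration of what the FABRIC PROPERTY GET traffic just above corresponds to: once a controller handle exists (for example from a connect like the sketch earlier), SPDK exposes the fabrics-backed register values through plain accessors. The helper below is illustrative only; the decode comments follow from the completions recorded above (cdw0:10300 for VS, cdw0:1e01007f for CAP).

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Illustrative helper, assuming 'ctrlr' came from spdk_nvme_connect():
 * over NVMe-oF these accessors are backed by the Property Get commands
 * whose submissions and completions this log records. */
static void print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* cdw0:10300 decodes to VS 1.3 ("NVMe Specification Version (VS): 1.3"). */
	printf("VS: %u.%u\n", vs.bits.mjr, vs.bits.mnr);
	/* CAP cdw0:1e01007f: MQES 0x7f is zero-based -> 128 entries
	 * ("Maximum Queue Entries: 128"); TO 0x1e * 500 ms = 15000 ms, the
	 * timeout used for the check en / wait for CSTS.RDY states. */
	printf("CAP.MQES: %u CAP.TO: %u\n", cap.bits.mqes, cap.bits.to);
	printf("CSTS.RDY: %u\n", csts.bits.rdy);
}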
00:21:51.480 [2024-07-15 13:52:17.937811] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.480 [2024-07-15 13:52:17.937819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.480 [2024-07-15 13:52:17.937837] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.481 [2024-07-15 13:52:17.937843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:51.481 [2024-07-15 13:52:17.937849] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:51.481 [2024-07-15 13:52:17.937855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:51.481 [2024-07-15 13:52:17.937862] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.937869] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:51.481 [2024-07-15 13:52:17.937975] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:51.481 [2024-07-15 13:52:17.937981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:51.481 [2024-07-15 13:52:17.937990] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.937999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.481 [2024-07-15 13:52:17.938018] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.481 [2024-07-15 13:52:17.938023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:51.481 [2024-07-15 13:52:17.938030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:51.481 [2024-07-15 13:52:17.938036] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938044] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.481 [2024-07-15 13:52:17.938068] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.481 [2024-07-15 13:52:17.938074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:51.481 [2024-07-15 13:52:17.938080] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:51.481 [2024-07-15 13:52:17.938087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 
30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938093] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938100] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:51.481 [2024-07-15 13:52:17.938111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938121] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:21:51.481 [2024-07-15 13:52:17.938173] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.481 [2024-07-15 13:52:17.938179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:51.481 [2024-07-15 13:52:17.938188] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:51.481 [2024-07-15 13:52:17.938194] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:51.481 [2024-07-15 13:52:17.938200] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:51.481 [2024-07-15 13:52:17.938206] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:51.481 [2024-07-15 13:52:17.938212] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:51.481 [2024-07-15 13:52:17.938218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938224] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938240] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.481 [2024-07-15 13:52:17.938273] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.481 [2024-07-15 13:52:17.938279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:51.481 [2024-07-15 13:52:17.938288] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.481 [2024-07-15 13:52:17.938303] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180100 00:21:51.481 
[2024-07-15 13:52:17.938310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.481 [2024-07-15 13:52:17.938317] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.481 [2024-07-15 13:52:17.938331] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.481 [2024-07-15 13:52:17.938344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938350] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938369] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.481 [2024-07-15 13:52:17.938393] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.481 [2024-07-15 13:52:17.938399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:21:51.481 [2024-07-15 13:52:17.938406] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:51.481 [2024-07-15 13:52:17.938414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938421] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938444] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.481 [2024-07-15 13:52:17.938475] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.481 [2024-07-15 13:52:17.938481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:21:51.481 [2024-07-15 13:52:17.938533] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938539] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938547] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938556] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180100 00:21:51.481 [2024-07-15 13:52:17.938591] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.481 [2024-07-15 13:52:17.938596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:51.481 [2024-07-15 13:52:17.938608] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:51.481 [2024-07-15 13:52:17.938623] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938629] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938637] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938646] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:21:51.481 [2024-07-15 13:52:17.938692] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.481 [2024-07-15 13:52:17.938698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:51.481 [2024-07-15 13:52:17.938712] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938718] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938726] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938735] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:21:51.481 [2024-07-15 13:52:17.938765] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.481 [2024-07-15 13:52:17.938771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:51.481 [2024-07-15 13:52:17.938780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938786] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:21:51.481 [2024-07-15 13:52:17.938793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:51.481 [2024-07-15 13:52:17.938822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:51.482 [2024-07-15 13:52:17.938829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:51.482 [2024-07-15 13:52:17.938835] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:51.482 [2024-07-15 13:52:17.938841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:51.482 [2024-07-15 13:52:17.938848] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:51.482 [2024-07-15 13:52:17.938863] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.938871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.482 [2024-07-15 13:52:17.938879] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.938886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.482 [2024-07-15 13:52:17.938897] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.482 [2024-07-15 13:52:17.938903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:51.482 [2024-07-15 13:52:17.938910] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.938916] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.482 [2024-07-15 13:52:17.938922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:51.482 [2024-07-15 13:52:17.938928] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.938937] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.938945] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.482 [2024-07-15 13:52:17.938966] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.482 [2024-07-15 13:52:17.938971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:51.482 [2024-07-15 13:52:17.938978] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.938987] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.938995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.482 [2024-07-15 13:52:17.939015] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.482 [2024-07-15 13:52:17.939021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:51.482 [2024-07-15 13:52:17.939027] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.939036] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.939044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.482 [2024-07-15 13:52:17.939058] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.482 [2024-07-15 13:52:17.939065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:21:51.482 [2024-07-15 13:52:17.939072] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.939086] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.939094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180100 00:21:51.482 [2024-07-15 13:52:17.939103] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.939111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x180100 00:21:51.482 [2024-07-15 13:52:17.939119] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.939127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x180100 00:21:51.482 [2024-07-15 13:52:17.939138] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.939145] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x180100 00:21:51.482 [2024-07-15 13:52:17.939154] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.482 [2024-07-15 13:52:17.939160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:51.482 [2024-07-15 13:52:17.939173] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.939180] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.482 [2024-07-15 13:52:17.939185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:51.482 [2024-07-15 13:52:17.939196] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.939203] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.482 [2024-07-15 13:52:17.939209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:51.482 [2024-07-15 13:52:17.939216] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:21:51.482 [2024-07-15 13:52:17.939222] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.482 [2024-07-15 13:52:17.939228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:51.482 [2024-07-15 13:52:17.939237] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:21:51.482 ===================================================== 00:21:51.482 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:51.482 ===================================================== 00:21:51.482 Controller Capabilities/Features 00:21:51.482 ================================ 00:21:51.482 Vendor ID: 8086 00:21:51.482 Subsystem Vendor ID: 8086 00:21:51.482 Serial Number: SPDK00000000000001 00:21:51.482 Model Number: SPDK bdev Controller 00:21:51.482 Firmware Version: 24.09 00:21:51.482 Recommended Arb Burst: 6 00:21:51.482 IEEE OUI Identifier: e4 d2 5c 00:21:51.482 Multi-path I/O 00:21:51.482 May have multiple subsystem ports: Yes 00:21:51.482 May have multiple controllers: Yes 00:21:51.482 Associated with SR-IOV VF: No 00:21:51.482 Max Data Transfer Size: 131072 00:21:51.482 Max Number of Namespaces: 32 00:21:51.482 Max Number of I/O Queues: 127 00:21:51.482 NVMe Specification Version (VS): 1.3 00:21:51.482 NVMe Specification Version (Identify): 1.3 00:21:51.482 Maximum Queue Entries: 128 00:21:51.482 Contiguous Queues Required: Yes 00:21:51.482 Arbitration Mechanisms Supported 00:21:51.482 Weighted Round Robin: Not Supported 00:21:51.482 Vendor Specific: Not Supported 00:21:51.482 Reset Timeout: 15000 ms 00:21:51.482 Doorbell Stride: 4 bytes 00:21:51.482 NVM Subsystem Reset: Not Supported 00:21:51.482 Command Sets Supported 00:21:51.482 NVM Command Set: Supported 00:21:51.482 Boot Partition: Not Supported 00:21:51.482 Memory Page Size Minimum: 4096 bytes 00:21:51.482 Memory Page Size Maximum: 4096 bytes 00:21:51.482 Persistent Memory Region: Not Supported 00:21:51.482 Optional Asynchronous Events 
Supported 00:21:51.482 Namespace Attribute Notices: Supported 00:21:51.482 Firmware Activation Notices: Not Supported 00:21:51.482 ANA Change Notices: Not Supported 00:21:51.482 PLE Aggregate Log Change Notices: Not Supported 00:21:51.482 LBA Status Info Alert Notices: Not Supported 00:21:51.482 EGE Aggregate Log Change Notices: Not Supported 00:21:51.482 Normal NVM Subsystem Shutdown event: Not Supported 00:21:51.482 Zone Descriptor Change Notices: Not Supported 00:21:51.482 Discovery Log Change Notices: Not Supported 00:21:51.482 Controller Attributes 00:21:51.482 128-bit Host Identifier: Supported 00:21:51.482 Non-Operational Permissive Mode: Not Supported 00:21:51.482 NVM Sets: Not Supported 00:21:51.482 Read Recovery Levels: Not Supported 00:21:51.482 Endurance Groups: Not Supported 00:21:51.482 Predictable Latency Mode: Not Supported 00:21:51.482 Traffic Based Keep ALive: Not Supported 00:21:51.482 Namespace Granularity: Not Supported 00:21:51.482 SQ Associations: Not Supported 00:21:51.482 UUID List: Not Supported 00:21:51.482 Multi-Domain Subsystem: Not Supported 00:21:51.482 Fixed Capacity Management: Not Supported 00:21:51.482 Variable Capacity Management: Not Supported 00:21:51.482 Delete Endurance Group: Not Supported 00:21:51.482 Delete NVM Set: Not Supported 00:21:51.482 Extended LBA Formats Supported: Not Supported 00:21:51.482 Flexible Data Placement Supported: Not Supported 00:21:51.482 00:21:51.482 Controller Memory Buffer Support 00:21:51.482 ================================ 00:21:51.482 Supported: No 00:21:51.482 00:21:51.482 Persistent Memory Region Support 00:21:51.482 ================================ 00:21:51.482 Supported: No 00:21:51.482 00:21:51.482 Admin Command Set Attributes 00:21:51.482 ============================ 00:21:51.482 Security Send/Receive: Not Supported 00:21:51.482 Format NVM: Not Supported 00:21:51.482 Firmware Activate/Download: Not Supported 00:21:51.483 Namespace Management: Not Supported 00:21:51.483 Device Self-Test: Not Supported 00:21:51.483 Directives: Not Supported 00:21:51.483 NVMe-MI: Not Supported 00:21:51.483 Virtualization Management: Not Supported 00:21:51.483 Doorbell Buffer Config: Not Supported 00:21:51.483 Get LBA Status Capability: Not Supported 00:21:51.483 Command & Feature Lockdown Capability: Not Supported 00:21:51.483 Abort Command Limit: 4 00:21:51.483 Async Event Request Limit: 4 00:21:51.483 Number of Firmware Slots: N/A 00:21:51.483 Firmware Slot 1 Read-Only: N/A 00:21:51.483 Firmware Activation Without Reset: N/A 00:21:51.483 Multiple Update Detection Support: N/A 00:21:51.483 Firmware Update Granularity: No Information Provided 00:21:51.483 Per-Namespace SMART Log: No 00:21:51.483 Asymmetric Namespace Access Log Page: Not Supported 00:21:51.483 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:51.483 Command Effects Log Page: Supported 00:21:51.483 Get Log Page Extended Data: Supported 00:21:51.483 Telemetry Log Pages: Not Supported 00:21:51.483 Persistent Event Log Pages: Not Supported 00:21:51.483 Supported Log Pages Log Page: May Support 00:21:51.483 Commands Supported & Effects Log Page: Not Supported 00:21:51.483 Feature Identifiers & Effects Log Page:May Support 00:21:51.483 NVMe-MI Commands & Effects Log Page: May Support 00:21:51.483 Data Area 4 for Telemetry Log: Not Supported 00:21:51.483 Error Log Page Entries Supported: 128 00:21:51.483 Keep Alive: Supported 00:21:51.483 Keep Alive Granularity: 10000 ms 00:21:51.483 00:21:51.483 NVM Command Set Attributes 00:21:51.483 ========================== 00:21:51.483 
Submission Queue Entry Size 00:21:51.483 Max: 64 00:21:51.483 Min: 64 00:21:51.483 Completion Queue Entry Size 00:21:51.483 Max: 16 00:21:51.483 Min: 16 00:21:51.483 Number of Namespaces: 32 00:21:51.483 Compare Command: Supported 00:21:51.483 Write Uncorrectable Command: Not Supported 00:21:51.483 Dataset Management Command: Supported 00:21:51.483 Write Zeroes Command: Supported 00:21:51.483 Set Features Save Field: Not Supported 00:21:51.483 Reservations: Supported 00:21:51.483 Timestamp: Not Supported 00:21:51.483 Copy: Supported 00:21:51.483 Volatile Write Cache: Present 00:21:51.483 Atomic Write Unit (Normal): 1 00:21:51.483 Atomic Write Unit (PFail): 1 00:21:51.483 Atomic Compare & Write Unit: 1 00:21:51.483 Fused Compare & Write: Supported 00:21:51.483 Scatter-Gather List 00:21:51.483 SGL Command Set: Supported 00:21:51.483 SGL Keyed: Supported 00:21:51.483 SGL Bit Bucket Descriptor: Not Supported 00:21:51.483 SGL Metadata Pointer: Not Supported 00:21:51.483 Oversized SGL: Not Supported 00:21:51.483 SGL Metadata Address: Not Supported 00:21:51.483 SGL Offset: Supported 00:21:51.483 Transport SGL Data Block: Not Supported 00:21:51.483 Replay Protected Memory Block: Not Supported 00:21:51.483 00:21:51.483 Firmware Slot Information 00:21:51.483 ========================= 00:21:51.483 Active slot: 1 00:21:51.483 Slot 1 Firmware Revision: 24.09 00:21:51.483 00:21:51.483 00:21:51.483 Commands Supported and Effects 00:21:51.483 ============================== 00:21:51.483 Admin Commands 00:21:51.483 -------------- 00:21:51.483 Get Log Page (02h): Supported 00:21:51.483 Identify (06h): Supported 00:21:51.483 Abort (08h): Supported 00:21:51.483 Set Features (09h): Supported 00:21:51.483 Get Features (0Ah): Supported 00:21:51.483 Asynchronous Event Request (0Ch): Supported 00:21:51.483 Keep Alive (18h): Supported 00:21:51.483 I/O Commands 00:21:51.483 ------------ 00:21:51.483 Flush (00h): Supported LBA-Change 00:21:51.483 Write (01h): Supported LBA-Change 00:21:51.483 Read (02h): Supported 00:21:51.483 Compare (05h): Supported 00:21:51.483 Write Zeroes (08h): Supported LBA-Change 00:21:51.483 Dataset Management (09h): Supported LBA-Change 00:21:51.483 Copy (19h): Supported LBA-Change 00:21:51.483 00:21:51.483 Error Log 00:21:51.483 ========= 00:21:51.483 00:21:51.483 Arbitration 00:21:51.483 =========== 00:21:51.483 Arbitration Burst: 1 00:21:51.483 00:21:51.483 Power Management 00:21:51.483 ================ 00:21:51.483 Number of Power States: 1 00:21:51.483 Current Power State: Power State #0 00:21:51.483 Power State #0: 00:21:51.483 Max Power: 0.00 W 00:21:51.483 Non-Operational State: Operational 00:21:51.483 Entry Latency: Not Reported 00:21:51.483 Exit Latency: Not Reported 00:21:51.483 Relative Read Throughput: 0 00:21:51.483 Relative Read Latency: 0 00:21:51.483 Relative Write Throughput: 0 00:21:51.483 Relative Write Latency: 0 00:21:51.483 Idle Power: Not Reported 00:21:51.483 Active Power: Not Reported 00:21:51.483 Non-Operational Permissive Mode: Not Supported 00:21:51.483 00:21:51.483 Health Information 00:21:51.483 ================== 00:21:51.483 Critical Warnings: 00:21:51.483 Available Spare Space: OK 00:21:51.483 Temperature: OK 00:21:51.483 Device Reliability: OK 00:21:51.483 Read Only: No 00:21:51.483 Volatile Memory Backup: OK 00:21:51.483 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:51.483 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:51.483 Available Spare: 0% 00:21:51.483 Available Spare Threshold: 0% 00:21:51.483 Life Percentage [2024-07-15 13:52:17.939319] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180100 00:21:51.483 [2024-07-15 13:52:17.939328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.483 [2024-07-15 13:52:17.939346] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.483 [2024-07-15 13:52:17.939352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:51.483 [2024-07-15 13:52:17.939358] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:21:51.483 [2024-07-15 13:52:17.939391] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:51.483 [2024-07-15 13:52:17.939401] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 15904 doesn't match qid 00:21:51.483 [2024-07-15 13:52:17.939414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:7ad0 p:0 m:0 dnr:0 00:21:51.483 [2024-07-15 13:52:17.939421] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 15904 doesn't match qid 00:21:51.483 [2024-07-15 13:52:17.939429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:7ad0 p:0 m:0 dnr:0 00:21:51.483 [2024-07-15 13:52:17.939436] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 15904 doesn't match qid 00:21:51.483 [2024-07-15 13:52:17.939444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:7ad0 p:0 m:0 dnr:0 00:21:51.483 [2024-07-15 13:52:17.939451] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 15904 doesn't match qid 00:21:51.483 [2024-07-15 13:52:17.939459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32746 cdw0:5 sqhd:7ad0 p:0 m:0 dnr:0 00:21:51.483 [2024-07-15 13:52:17.939469] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:21:51.483 [2024-07-15 13:52:17.939477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.483 [2024-07-15 13:52:17.939496] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.483 [2024-07-15 13:52:17.939502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:21:51.483 [2024-07-15 13:52:17.939511] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.483 [2024-07-15 13:52:17.939518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.483 [2024-07-15 13:52:17.939525] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:21:51.483 [2024-07-15 13:52:17.939546] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.483 [2024-07-15 13:52:17.939552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:51.483 [2024-07-15 13:52:17.939558] 
nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:51.483 [2024-07-15 13:52:17.939570] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:51.483 [2024-07-15 13:52:17.939576] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:21:51.483 [2024-07-15 13:52:17.939585] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.483 [2024-07-15 13:52:17.939593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.483 [2024-07-15 13:52:17.939613] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.483 [2024-07-15 13:52:17.939619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:21:51.483 [2024-07-15 13:52:17.939626] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:21:51.483 [2024-07-15 13:52:17.939636] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.483 [2024-07-15 13:52:17.939643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.483 [2024-07-15 13:52:17.939658] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.939663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.939672] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939681] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.939711] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.939717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.939723] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939733] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.939764] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.939770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.939777] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939786] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.939818] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.939824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.939830] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939839] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.939863] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.939869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.939875] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939884] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.939914] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.939920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.939927] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939936] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.939962] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.939967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.939976] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939985] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.939993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940013] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:21:51.484 
[2024-07-15 13:52:17.940025] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940034] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940068] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940080] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940089] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940119] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940131] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940140] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940164] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940176] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940186] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940209] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940222] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940231] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940259] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940274] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940284] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940311] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940324] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940333] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940365] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940377] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940386] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940412] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940424] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940433] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940461] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940473] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940482] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940506] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940518] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940528] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940559] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940575] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940584] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.484 [2024-07-15 13:52:17.940592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.484 [2024-07-15 13:52:17.940614] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.484 [2024-07-15 13:52:17.940620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:21:51.484 [2024-07-15 13:52:17.940626] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940636] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.940665] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.940671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.940678] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940687] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.940713] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.940719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 
13:52:17.940725] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940734] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.940762] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.940768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.940775] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940784] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.940811] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.940817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.940824] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940833] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.940856] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.940862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.940869] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940878] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.940910] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.940916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.940922] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940931] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.940959] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.940965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.940972] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940981] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.940988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941011] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941023] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941032] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941062] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941074] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941083] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941109] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941121] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941131] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941156] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941169] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941178] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941204] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941216] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941225] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941251] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941263] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941272] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941304] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941316] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941326] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941349] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941362] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941371] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941399] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 
13:52:17.941411] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941420] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941445] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941458] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941467] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941493] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941505] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941514] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.941522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.485 [2024-07-15 13:52:17.941544] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.485 [2024-07-15 13:52:17.941550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:21:51.485 [2024-07-15 13:52:17.941556] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.945571] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:21:51.485 [2024-07-15 13:52:17.945581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:21:51.486 [2024-07-15 13:52:17.945598] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:21:51.486 [2024-07-15 13:52:17.945603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0003 p:0 m:0 dnr:0 00:21:51.486 [2024-07-15 13:52:17.945610] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:21:51.486 [2024-07-15 13:52:17.945617] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:21:51.486 Used: 0% 00:21:51.486 Data Units Read: 0 00:21:51.486 Data Units Written: 0 00:21:51.486 Host Read Commands: 0 00:21:51.486 Host Write Commands: 0 00:21:51.486 Controller Busy Time: 0 minutes 00:21:51.486 Power Cycles: 
0 00:21:51.486 Power On Hours: 0 hours 00:21:51.486 Unsafe Shutdowns: 0 00:21:51.486 Unrecoverable Media Errors: 0 00:21:51.486 Lifetime Error Log Entries: 0 00:21:51.486 Warning Temperature Time: 0 minutes 00:21:51.486 Critical Temperature Time: 0 minutes 00:21:51.486 00:21:51.486 Number of Queues 00:21:51.486 ================ 00:21:51.486 Number of I/O Submission Queues: 127 00:21:51.486 Number of I/O Completion Queues: 127 00:21:51.486 00:21:51.486 Active Namespaces 00:21:51.486 ================= 00:21:51.486 Namespace ID:1 00:21:51.486 Error Recovery Timeout: Unlimited 00:21:51.486 Command Set Identifier: NVM (00h) 00:21:51.486 Deallocate: Supported 00:21:51.486 Deallocated/Unwritten Error: Not Supported 00:21:51.486 Deallocated Read Value: Unknown 00:21:51.486 Deallocate in Write Zeroes: Not Supported 00:21:51.486 Deallocated Guard Field: 0xFFFF 00:21:51.486 Flush: Supported 00:21:51.486 Reservation: Supported 00:21:51.486 Namespace Sharing Capabilities: Multiple Controllers 00:21:51.486 Size (in LBAs): 131072 (0GiB) 00:21:51.486 Capacity (in LBAs): 131072 (0GiB) 00:21:51.486 Utilization (in LBAs): 131072 (0GiB) 00:21:51.486 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:51.486 EUI64: ABCDEF0123456789 00:21:51.486 UUID: 5c826e9a-82f5-4170-b8c6-ece20975a55e 00:21:51.486 Thin Provisioning: Not Supported 00:21:51.486 Per-NS Atomic Units: Yes 00:21:51.486 Atomic Boundary Size (Normal): 0 00:21:51.486 Atomic Boundary Size (PFail): 0 00:21:51.486 Atomic Boundary Offset: 0 00:21:51.486 Maximum Single Source Range Length: 65535 00:21:51.486 Maximum Copy Length: 65535 00:21:51.486 Maximum Source Range Count: 1 00:21:51.486 NGUID/EUI64 Never Reused: No 00:21:51.486 Namespace Write Protected: No 00:21:51.486 Number of LBA Formats: 1 00:21:51.486 Current LBA Format: LBA Format #00 00:21:51.486 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:51.486 00:21:51.486 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:51.744 13:52:17 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:51.744 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.744 13:52:17 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:51.744 rmmod nvme_rdma 00:21:51.744 rmmod nvme_fabrics 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@489 -- # '[' -n 2550161 ']' 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2550161 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2550161 ']' 00:21:51.744 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2550161 00:21:51.745 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:21:51.745 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:51.745 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2550161 00:21:51.745 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:51.745 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:51.745 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2550161' 00:21:51.745 killing process with pid 2550161 00:21:51.745 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2550161 00:21:51.745 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2550161 00:21:52.004 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.004 13:52:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:52.004 00:21:52.004 real 0m8.877s 00:21:52.004 user 0m8.401s 00:21:52.004 sys 0m5.805s 00:21:52.004 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:52.004 13:52:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.004 ************************************ 00:21:52.004 END TEST nvmf_identify 00:21:52.004 ************************************ 00:21:52.004 13:52:18 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:52.004 13:52:18 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:21:52.004 13:52:18 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:52.004 13:52:18 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:52.004 13:52:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:52.004 ************************************ 00:21:52.004 START TEST nvmf_perf 00:21:52.004 ************************************ 00:21:52.004 13:52:18 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:21:52.263 * Looking for test storage... 
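The trace above finishes the identify test: the test subsystem is removed over JSON-RPC (rpc_cmd nvmf_delete_subsystem), the nvmf_tgt process (pid 2550161 in this run) is killed and reaped, and the host-side fabrics modules are unloaded before nvmf_perf begins. A rough standalone equivalent of that teardown, outside the autotest harness, is sketched below; the rpc.py path is the one assigned to rpc_py further down in this log, and the direct kill/wait only applies if nvmf_tgt was started from the same shell.

  # teardown sketch, not the harness code itself
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"   # pid of nvmf_tgt (2550161 here); wait only works for a child of this shell
  modprobe -v -r nvme-rdma             # mirrors the 'rmmod nvme_rdma' output above
  modprobe -v -r nvme-fabrics          # mirrors 'rmmod nvme_fabrics'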
00:21:52.263 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.263 
13:52:18 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.263 13:52:18 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@10 -- # set +x 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:58.927 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 
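Here common.sh walks its cached PCI device list looking for supported NVMe-oF NICs and has just matched the first port of a Mellanox ConnectX adapter, 0000:18:00.0 (vendor 0x15b3, device 0x1015); the second port, 0000:18:00.1, follows immediately below. The same discovery can be reproduced by hand with lspci; the grep pattern is simply the Mellanox vendor ID seen in the trace.

  # list PCI functions with domain and numeric vendor:device IDs, keep the Mellanox ones
  lspci -Dnn | grep 15b3
  # on this node the two ports should appear as 0000:18:00.0 and 0000:18:00.1 with [15b3:1015]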
00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:58.927 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.927 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:58.928 Found net devices under 0000:18:00.0: mlx_0_0 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:58.928 Found net devices under 0000:18:00.1: mlx_0_1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 
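With both mlx5 ports found and is_hw=yes for the rdma transport, the script brings up the RDMA stack: rdma_device_init loads the IB/RDMA kernel modules (traced below) and allocate_nic_ips places the 192.168.100.0/24 test addresses on the two renamed ports, which later steps read back with the ip/awk/cut pipeline. A minimal standalone sketch of the same bring-up follows; the module list, addresses, and interface names are taken from the trace, while the explicit 'ip addr add' form is an assumption about how the helper assigns them.

  # load the RDMA/IB modules the trace below loads one by one
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do modprobe "$m"; done
  # put the test subnet on the two ConnectX ports
  ip addr add 192.168.100.8/24 dev mlx_0_0
  ip addr add 192.168.100.9/24 dev mlx_0_1
  # read an address back the same way common.sh does
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8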
00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf 
-- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:58.928 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:58.928 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:21:58.928 altname enp24s0f0np0 00:21:58.928 altname ens785f0np0 00:21:58.928 inet 192.168.100.8/24 scope global mlx_0_0 00:21:58.928 valid_lft forever preferred_lft forever 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:58.928 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:58.928 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:21:58.928 altname enp24s0f1np1 00:21:58.928 altname ens785f1np1 00:21:58.928 inet 192.168.100.9/24 scope global mlx_0_1 00:21:58.928 valid_lft forever preferred_lft forever 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:58.928 13:52:25 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:58.928 192.168.100.9' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:58.928 192.168.100.9' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:58.928 192.168.100.9' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2553271 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2553271 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2553271 ']' 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.928 13:52:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.929 13:52:25 
nvmf_rdma.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.929 13:52:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.929 13:52:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:59.187 [2024-07-15 13:52:25.481578] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:21:59.187 [2024-07-15 13:52:25.481645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.187 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.187 [2024-07-15 13:52:25.554952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.187 [2024-07-15 13:52:25.640590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.187 [2024-07-15 13:52:25.640628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.187 [2024-07-15 13:52:25.640638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.187 [2024-07-15 13:52:25.640647] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.187 [2024-07-15 13:52:25.640655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.187 [2024-07-15 13:52:25.640717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.187 [2024-07-15 13:52:25.640743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.187 [2024-07-15 13:52:25.640768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.187 [2024-07-15 13:52:25.640769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.121 13:52:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.121 13:52:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:00.121 13:52:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.121 13:52:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.121 13:52:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:00.121 13:52:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.121 13:52:26 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:00.121 13:52:26 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:03.404 13:52:29 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:03.404 13:52:29 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:03.404 13:52:29 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:22:03.404 13:52:29 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:03.404 13:52:29 nvmf_rdma.nvmf_perf -- 
host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:03.404 13:52:29 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:22:03.404 13:52:29 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:03.404 13:52:29 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:22:03.404 13:52:29 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:22:03.662 [2024-07-15 13:52:29.967008] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:22:03.662 [2024-07-15 13:52:29.987441] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12b6720/0x12c4200) succeed. 00:22:03.662 [2024-07-15 13:52:29.997155] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12b7d60/0x1344240) succeed. 00:22:03.662 13:52:30 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:03.921 13:52:30 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:03.921 13:52:30 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:04.179 13:52:30 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:04.179 13:52:30 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:04.179 13:52:30 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:04.437 [2024-07-15 13:52:30.831722] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:04.437 13:52:30 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:04.696 13:52:31 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:22:04.696 13:52:31 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:22:04.696 13:52:31 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:04.696 13:52:31 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:22:06.070 Initializing NVMe Controllers 00:22:06.070 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:22:06.070 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:22:06.070 Initialization complete. Launching workers. 
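This first spdk_nvme_perf run is the local baseline: host/perf.sh points the tool straight at the PCIe controller at 0000:5f:00.0 before any NVMe-oF target is exercised, so the latency table that follows is the reference point for the RDMA runs further down. A sketch of an equivalent stand-alone invocation, assuming an SPDK build at the workspace path shown in the log; the flags mirror the ones captured above:

    SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
    # queue depth 32, 4 KiB random mixed I/O with a 50% read split, 1 second run,
    # addressed to the local PCIe controller reported earlier in the log
    "$SPDK_BIN/spdk_nvme_perf" -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:PCIe traddr:0000:5f:00.0'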
00:22:06.070 ======================================================== 00:22:06.070 Latency(us) 00:22:06.070 Device Information : IOPS MiB/s Average min max 00:22:06.070 PCIE (0000:5f:00.0) NSID 1 from core 0: 98077.48 383.12 325.83 34.92 4400.44 00:22:06.070 ======================================================== 00:22:06.070 Total : 98077.48 383.12 325.83 34.92 4400.44 00:22:06.070 00:22:06.070 13:52:32 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:06.070 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.350 Initializing NVMe Controllers 00:22:09.350 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:09.350 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:09.350 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:09.350 Initialization complete. Launching workers. 00:22:09.350 ======================================================== 00:22:09.350 Latency(us) 00:22:09.350 Device Information : IOPS MiB/s Average min max 00:22:09.350 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6761.99 26.41 147.08 45.34 4169.54 00:22:09.350 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5259.99 20.55 189.91 68.06 4247.63 00:22:09.350 ======================================================== 00:22:09.350 Total : 12021.99 46.96 165.82 45.34 4247.63 00:22:09.350 00:22:09.350 13:52:35 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:09.350 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.652 Initializing NVMe Controllers 00:22:12.652 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.652 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:12.652 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:12.652 Initialization complete. Launching workers. 00:22:12.652 ======================================================== 00:22:12.652 Latency(us) 00:22:12.652 Device Information : IOPS MiB/s Average min max 00:22:12.652 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18212.00 71.14 1757.40 505.68 5520.28 00:22:12.652 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7978.99 7725.37 8201.61 00:22:12.652 ======================================================== 00:22:12.652 Total : 22244.00 86.89 2885.14 505.68 8201.61 00:22:12.652 00:22:12.652 13:52:39 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:22:12.652 13:52:39 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:12.652 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.919 Initializing NVMe Controllers 00:22:17.919 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:17.919 Controller IO queue size 128, less than required. 
00:22:17.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.919 Controller IO queue size 128, less than required. 00:22:17.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.919 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:17.919 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:17.919 Initialization complete. Launching workers. 00:22:17.919 ======================================================== 00:22:17.919 Latency(us) 00:22:17.919 Device Information : IOPS MiB/s Average min max 00:22:17.919 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3870.70 967.68 33066.00 15468.38 74729.44 00:22:17.919 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3991.47 997.87 31847.07 15347.72 58225.23 00:22:17.919 ======================================================== 00:22:17.919 Total : 7862.18 1965.54 32447.18 15347.72 74729.44 00:22:17.919 00:22:17.919 13:52:43 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:22:17.919 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.919 No valid NVMe controllers or AIO or URING devices found 00:22:17.919 Initializing NVMe Controllers 00:22:17.919 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:17.919 Controller IO queue size 128, less than required. 00:22:17.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.919 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:17.919 Controller IO queue size 128, less than required. 00:22:17.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.919 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:17.919 WARNING: Some requested NVMe devices were skipped 00:22:17.919 13:52:43 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:22:17.919 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.105 Initializing NVMe Controllers 00:22:22.105 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:22.105 Controller IO queue size 128, less than required. 00:22:22.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:22.105 Controller IO queue size 128, less than required. 00:22:22.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:22.105 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:22.105 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:22.105 Initialization complete. Launching workers. 
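The last run in this test adds --transport-stat, so after the I/O phase the initiator dumps per-namespace RDMA counters (polls, idle_polls, completions, work requests, doorbell updates) before the usual latency table. One quick way to read those counters, assuming the console output has been saved to a file (perf_stats.log is a hypothetical name, not something the script produces), is to compute how many send work requests were batched behind each doorbell update:

    awk '/total_send_wrs:/        {wrs=$NF}
         /send_doorbell_updates:/ {print "send WRs per doorbell update:", wrs/$NF}' perf_stats.log

With the counters printed below this works out to roughly 7.6 for NSID 1 (21283/2812) and about 39 for NSID 2 (9903/251).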
00:22:22.105 00:22:22.105 ==================== 00:22:22.105 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:22.105 RDMA transport: 00:22:22.105 dev name: mlx5_0 00:22:22.105 polls: 397576 00:22:22.105 idle_polls: 394489 00:22:22.105 completions: 42566 00:22:22.105 queued_requests: 1 00:22:22.105 total_send_wrs: 21283 00:22:22.105 send_doorbell_updates: 2812 00:22:22.105 total_recv_wrs: 21410 00:22:22.105 recv_doorbell_updates: 2815 00:22:22.105 --------------------------------- 00:22:22.105 00:22:22.105 ==================== 00:22:22.105 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:22.105 RDMA transport: 00:22:22.105 dev name: mlx5_0 00:22:22.105 polls: 402067 00:22:22.105 idle_polls: 401803 00:22:22.105 completions: 19806 00:22:22.105 queued_requests: 1 00:22:22.105 total_send_wrs: 9903 00:22:22.105 send_doorbell_updates: 251 00:22:22.105 total_recv_wrs: 10030 00:22:22.105 recv_doorbell_updates: 252 00:22:22.105 --------------------------------- 00:22:22.105 ======================================================== 00:22:22.105 Latency(us) 00:22:22.105 Device Information : IOPS MiB/s Average min max 00:22:22.105 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5311.52 1327.88 24106.85 11700.71 59198.98 00:22:22.105 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2471.32 617.83 51835.61 30486.67 80005.29 00:22:22.105 ======================================================== 00:22:22.105 Total : 7782.85 1945.71 32911.69 11700.71 80005.29 00:22:22.105 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:22.105 rmmod nvme_rdma 00:22:22.105 rmmod nvme_fabrics 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2553271 ']' 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2553271 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2553271 ']' 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2553271 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2553271 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2553271' 00:22:22.105 killing process with pid 2553271 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2553271 00:22:22.105 13:52:48 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2553271 00:22:30.210 13:52:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:30.210 13:52:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:30.210 00:22:30.210 real 0m37.207s 00:22:30.210 user 2m2.377s 00:22:30.210 sys 0m6.539s 00:22:30.210 13:52:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:30.210 13:52:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:30.210 ************************************ 00:22:30.210 END TEST nvmf_perf 00:22:30.210 ************************************ 00:22:30.210 13:52:55 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:30.210 13:52:55 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:22:30.210 13:52:55 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:30.210 13:52:55 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.210 13:52:55 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:30.210 ************************************ 00:22:30.210 START TEST nvmf_fio_host 00:22:30.210 ************************************ 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:22:30.210 * Looking for test storage... 
00:22:30.210 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:30.210 13:52:55 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:30.211 13:52:55 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 
00:22:36.836 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:36.836 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:36.836 Found net devices under 0000:18:00.0: mlx_0_0 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:36.836 Found net devices under 0000:18:00.1: mlx_0_1 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.836 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:36.837 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.837 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:22:36.837 altname enp24s0f0np0 00:22:36.837 altname ens785f0np0 00:22:36.837 inet 192.168.100.8/24 scope global mlx_0_0 00:22:36.837 valid_lft forever preferred_lft forever 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:36.837 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.837 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:22:36.837 altname enp24s0f1np1 00:22:36.837 altname ens785f1np1 00:22:36.837 inet 192.168.100.9/24 scope global mlx_0_1 00:22:36.837 valid_lft forever preferred_lft forever 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # 
continue 2 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:36.837 192.168.100.9' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:36.837 192.168.100.9' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:36.837 192.168.100.9' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2560195 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2560195 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2560195 ']' 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:36.837 13:53:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.837 [2024-07-15 13:53:02.765493] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:36.837 [2024-07-15 13:53:02.765559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.837 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.837 [2024-07-15 13:53:02.859061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.837 [2024-07-15 13:53:02.951325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.837 [2024-07-15 13:53:02.951367] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.837 [2024-07-15 13:53:02.951376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.837 [2024-07-15 13:53:02.951384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.837 [2024-07-15 13:53:02.951391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.837 [2024-07-15 13:53:02.951471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.837 [2024-07-15 13:53:02.951591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.837 [2024-07-15 13:53:02.951667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.837 [2024-07-15 13:53:02.951667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.095 13:53:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.095 13:53:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:37.095 13:53:03 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:37.353 [2024-07-15 13:53:03.781386] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11e1180/0x11e5670) succeed. 
00:22:37.353 [2024-07-15 13:53:03.790887] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11e27c0/0x1226d00) succeed. 00:22:37.611 13:53:03 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:37.611 13:53:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:37.611 13:53:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.611 13:53:03 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:37.869 Malloc1 00:22:37.869 13:53:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.869 13:53:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:38.127 13:53:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:38.385 [2024-07-15 13:53:04.708044] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:38.385 13:53:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:38.652 
13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:38.652 13:53:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:22:38.909 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:38.909 fio-3.35 00:22:38.909 Starting 1 thread 00:22:38.909 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.429 00:22:41.429 test: (groupid=0, jobs=1): err= 0: pid=2560678: Mon Jul 15 13:53:07 2024 00:22:41.429 read: IOPS=17.7k, BW=69.0MiB/s (72.3MB/s)(138MiB/2004msec) 00:22:41.429 slat (nsec): min=1399, max=34339, avg=1552.87, stdev=427.18 00:22:41.429 clat (usec): min=1777, max=6800, avg=3600.16, stdev=89.93 00:22:41.429 lat (usec): min=1795, max=6802, avg=3601.72, stdev=89.84 00:22:41.429 clat percentiles (usec): 00:22:41.429 | 1.00th=[ 3556], 5.00th=[ 3556], 10.00th=[ 3589], 20.00th=[ 3589], 00:22:41.429 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3589], 00:22:41.429 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3621], 00:22:41.429 | 99.00th=[ 3687], 99.50th=[ 3687], 99.90th=[ 5014], 99.95th=[ 5866], 00:22:41.429 | 99.99th=[ 6390] 00:22:41.429 bw ( KiB/s): min=69160, max=71200, per=100.00%, avg=70618.00, stdev=976.55, samples=4 00:22:41.429 iops : min=17290, max=17800, avg=17654.50, stdev=244.14, samples=4 00:22:41.429 write: IOPS=17.7k, BW=68.9MiB/s (72.3MB/s)(138MiB/2004msec); 0 zone resets 00:22:41.429 slat (nsec): min=1441, max=19248, avg=1873.00, stdev=495.15 00:22:41.430 clat (usec): min=2542, max=6794, avg=3597.85, stdev=82.39 00:22:41.430 lat (usec): min=2551, max=6796, avg=3599.72, stdev=82.30 00:22:41.430 clat percentiles (usec): 00:22:41.430 | 1.00th=[ 3556], 5.00th=[ 3556], 10.00th=[ 3589], 20.00th=[ 3589], 00:22:41.430 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3589], 00:22:41.430 | 70.00th=[ 3589], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3621], 00:22:41.430 | 99.00th=[ 3687], 99.50th=[ 3687], 99.90th=[ 4555], 99.95th=[ 5473], 00:22:41.430 | 99.99th=[ 6390] 00:22:41.430 bw ( KiB/s): min=69144, max=71168, per=100.00%, avg=70636.00, stdev=995.10, samples=4 00:22:41.430 iops : min=17286, max=17792, avg=17659.00, stdev=248.78, samples=4 00:22:41.430 lat (msec) : 2=0.01%, 4=99.82%, 10=0.18% 00:22:41.430 cpu : usr=99.50%, sys=0.05%, ctx=16, majf=0, minf=4 00:22:41.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:41.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:41.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:41.430 issued rwts: total=35381,35373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:41.430 00:22:41.430 Run status group 0 (all jobs): 00:22:41.430 READ: bw=69.0MiB/s (72.3MB/s), 69.0MiB/s-69.0MiB/s (72.3MB/s-72.3MB/s), io=138MiB (145MB), run=2004-2004msec 00:22:41.430 WRITE: bw=68.9MiB/s (72.3MB/s), 68.9MiB/s-68.9MiB/s (72.3MB/s-72.3MB/s), io=138MiB (145MB), run=2004-2004msec 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:41.430 13:53:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 
trsvcid=4420 ns=1' 00:22:41.430 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:41.430 fio-3.35 00:22:41.430 Starting 1 thread 00:22:41.430 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.952 00:22:43.952 test: (groupid=0, jobs=1): err= 0: pid=2561124: Mon Jul 15 13:53:10 2024 00:22:43.952 read: IOPS=14.2k, BW=222MiB/s (233MB/s)(435MiB/1958msec) 00:22:43.952 slat (nsec): min=2310, max=51028, avg=2614.17, stdev=996.24 00:22:43.952 clat (usec): min=459, max=8725, avg=1655.31, stdev=1339.73 00:22:43.952 lat (usec): min=461, max=8742, avg=1657.92, stdev=1340.02 00:22:43.952 clat percentiles (usec): 00:22:43.952 | 1.00th=[ 693], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 930], 00:22:43.952 | 30.00th=[ 996], 40.00th=[ 1074], 50.00th=[ 1188], 60.00th=[ 1303], 00:22:43.952 | 70.00th=[ 1434], 80.00th=[ 1631], 90.00th=[ 4424], 95.00th=[ 5080], 00:22:43.952 | 99.00th=[ 6456], 99.50th=[ 7046], 99.90th=[ 7635], 99.95th=[ 7832], 00:22:43.952 | 99.99th=[ 8717] 00:22:43.952 bw ( KiB/s): min=109504, max=113984, per=48.93%, avg=111248.00, stdev=1920.98, samples=4 00:22:43.952 iops : min= 6844, max= 7124, avg=6953.00, stdev=120.06, samples=4 00:22:43.952 write: IOPS=8128, BW=127MiB/s (133MB/s)(227MiB/1785msec); 0 zone resets 00:22:43.952 slat (usec): min=27, max=113, avg=30.27, stdev= 5.51 00:22:43.952 clat (usec): min=4737, max=20107, avg=12789.30, stdev=1765.60 00:22:43.952 lat (usec): min=4769, max=20138, avg=12819.57, stdev=1765.45 00:22:43.952 clat percentiles (usec): 00:22:43.952 | 1.00th=[ 8094], 5.00th=[10159], 10.00th=[10814], 20.00th=[11469], 00:22:43.952 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[13173], 00:22:43.952 | 70.00th=[13566], 80.00th=[14091], 90.00th=[15008], 95.00th=[15795], 00:22:43.952 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19268], 99.95th=[19530], 00:22:43.952 | 99.99th=[20055] 00:22:43.952 bw ( KiB/s): min=111232, max=116992, per=88.56%, avg=115184.00, stdev=2658.57, samples=4 00:22:43.952 iops : min= 6952, max= 7312, avg=7199.00, stdev=166.16, samples=4 00:22:43.952 lat (usec) : 500=0.01%, 750=1.84%, 1000=17.98% 00:22:43.952 lat (msec) : 2=36.95%, 4=2.20%, 10=8.09%, 20=32.93%, 50=0.01% 00:22:43.952 cpu : usr=96.21%, sys=2.05%, ctx=182, majf=0, minf=3 00:22:43.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:43.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:43.952 issued rwts: total=27823,14510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:43.952 00:22:43.952 Run status group 0 (all jobs): 00:22:43.952 READ: bw=222MiB/s (233MB/s), 222MiB/s-222MiB/s (233MB/s-233MB/s), io=435MiB (456MB), run=1958-1958msec 00:22:43.952 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=227MiB (238MB), run=1785-1785msec 00:22:43.952 13:53:10 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.952 13:53:10 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:43.952 13:53:10 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:43.952 13:53:10 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:43.953 13:53:10 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 
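The fio_nvme helper traced above reduces to a plain fio run with the SPDK NVMe ioengine preloaded; a minimal sketch of that invocation, using the plugin path, job file and target address exactly as they appear in this run (the helper variables are only for readability):

# Sketch: drive the NVMe-oF/RDMA subsystem with fio through the SPDK plugin.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
PLUGIN=$SPDK_DIR/build/fio/spdk_nvme           # built fio ioengine plugin
JOB=$SPDK_DIR/app/fio/nvme/example_config.fio  # job file used by host/fio.sh

# The target is addressed through fio's --filename in the plugin's key=value
# form instead of a block-device path; no kernel-side nvme connect is involved.
LD_PRELOAD=$PLUGIN /usr/src/fio/fio "$JOB" \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' \
    --bs=4096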
00:22:43.953 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.953 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:43.953 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:43.953 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:43.953 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:43.953 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.953 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:43.953 rmmod nvme_rdma 00:22:43.953 rmmod nvme_fabrics 00:22:44.209 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.209 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:44.209 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:44.209 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2560195 ']' 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2560195 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2560195 ']' 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2560195 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2560195 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2560195' 00:22:44.210 killing process with pid 2560195 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2560195 00:22:44.210 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2560195 00:22:44.468 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:44.468 13:53:10 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:44.468 00:22:44.468 real 0m15.091s 00:22:44.468 user 0m43.565s 00:22:44.468 sys 0m6.275s 00:22:44.468 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.468 13:53:10 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.468 ************************************ 00:22:44.468 END TEST nvmf_fio_host 00:22:44.468 ************************************ 00:22:44.468 13:53:10 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:44.468 13:53:10 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:22:44.468 13:53:10 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:44.468 13:53:10 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.468 13:53:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:44.468 ************************************ 00:22:44.468 START TEST nvmf_failover 00:22:44.468 ************************************ 00:22:44.468 13:53:10 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:22:44.726 * Looking for test storage... 00:22:44.726 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.726 13:53:11 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.727 13:53:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.353 
13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:51.353 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:51.353 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:51.353 Found net devices under 0000:18:00.0: mlx_0_0 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:51.353 Found net devices under 0000:18:00.1: mlx_0_1 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:51.353 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:51.354 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:51.354 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:22:51.354 altname enp24s0f0np0 00:22:51.354 altname ens785f0np0 00:22:51.354 inet 192.168.100.8/24 scope global mlx_0_0 00:22:51.354 valid_lft forever preferred_lft forever 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:51.354 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:51.354 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:22:51.354 altname enp24s0f1np1 00:22:51.354 altname ens785f1np1 00:22:51.354 inet 192.168.100.9/24 scope global mlx_0_1 00:22:51.354 valid_lft forever preferred_lft forever 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:51.354 13:53:17 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:51.354 192.168.100.9' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:51.354 192.168.100.9' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:51.354 192.168.100.9' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:51.354 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2564387 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2564387 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2564387 ']' 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.613 13:53:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:51.613 [2024-07-15 13:53:17.960942] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:22:51.613 [2024-07-15 13:53:17.961002] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.613 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.613 [2024-07-15 13:53:18.046582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:51.613 [2024-07-15 13:53:18.131606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.613 [2024-07-15 13:53:18.131650] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.613 [2024-07-15 13:53:18.131664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.613 [2024-07-15 13:53:18.131690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.613 [2024-07-15 13:53:18.131700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
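The address discovery traced in nvmf/common.sh above amounts to reading the first IPv4 address of each RDMA netdev; a condensed sketch of that pipeline, assuming the same mlx_0_0/mlx_0_1 interface names and 192.168.100.0/24 test addresses as this run (the real script walks get_rdma_if_list instead of a fixed list):

# Sketch: derive RDMA_IP_LIST and the two target IPs from the netdevs.
get_ip_address() {
    local interface=$1
    # Take the CIDR address column from `ip -o -4` and strip the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9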
00:22:51.613 [2024-07-15 13:53:18.131820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.613 [2024-07-15 13:53:18.131922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.613 [2024-07-15 13:53:18.131923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.544 13:53:18 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.544 13:53:18 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:52.544 13:53:18 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.544 13:53:18 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.544 13:53:18 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:52.544 13:53:18 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.544 13:53:18 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:52.544 [2024-07-15 13:53:19.009855] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a33a80/0x1a37f70) succeed. 00:22:52.544 [2024-07-15 13:53:19.019368] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a35020/0x1a79600) succeed. 00:22:52.801 13:53:19 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:53.057 Malloc0 00:22:53.057 13:53:19 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.057 13:53:19 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:53.315 13:53:19 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:53.571 [2024-07-15 13:53:19.893920] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:53.571 13:53:19 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:53.571 [2024-07-15 13:53:20.082282] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:22:53.829 [2024-07-15 13:53:20.270909] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2564610 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; 
rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2564610 /var/tmp/bdevperf.sock 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2564610 ']' 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.829 13:53:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:54.758 13:53:21 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.758 13:53:21 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:54.758 13:53:21 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.014 NVMe0n1 00:22:55.014 13:53:21 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.270 00:22:55.270 13:53:21 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2564806 00:22:55.270 13:53:21 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:55.270 13:53:21 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:56.199 13:53:22 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:56.456 13:53:22 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:59.727 13:53:25 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.727 00:22:59.727 13:53:26 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:59.983 13:53:26 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:03.254 13:53:29 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:03.254 [2024-07-15 13:53:29.516035] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:03.254 13:53:29 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:04.245 13:53:30 nvmf_rdma.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:23:04.245 13:53:30 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 2564806 00:23:10.816 0 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 2564610 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2564610 ']' 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2564610 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2564610 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2564610' 00:23:10.816 killing process with pid 2564610 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2564610 00:23:10.816 13:53:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2564610 00:23:10.816 13:53:37 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:10.816 [2024-07-15 13:53:20.339509] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:10.816 [2024-07-15 13:53:20.339587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564610 ] 00:23:10.816 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.816 [2024-07-15 13:53:20.426983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.816 [2024-07-15 13:53:20.508584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.816 Running I/O for 15 seconds... 
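While bdevperf runs the 15-second verify workload shown above, host/failover.sh (traced earlier at @43-@57) shuffles the subsystem's listeners so the initiator is forced to fail over between ports; a condensed sketch of that sequence, with the NQN, address and ports taken from this run and error handling omitted:

# Sketch: the listener shuffle driven against the running target and bdevperf.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
ADDR=192.168.100.8

# Drop the port the initiator first connected through (target-side RPC socket).
$RPC nvmf_subsystem_remove_listener $NQN -t rdma -a $ADDR -s 4420
sleep 3
# Give bdevperf an extra path on 4422 (this RPC goes to bdevperf's own socket),
# then retire 4421, restore 4420, and finally remove 4422 again.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t rdma -a $ADDR -s 4422 -f ipv4 -n $NQN
$RPC nvmf_subsystem_remove_listener $NQN -t rdma -a $ADDR -s 4421
sleep 3
$RPC nvmf_subsystem_add_listener $NQN -t rdma -a $ADDR -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t rdma -a $ADDR -s 4422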
00:23:10.816 [log condensed: 127 nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs from nvme_qpair.c, stamped 00:23:10.816-00:23:10.819 [2024-07-15 13:53:23.865957-13:53:23.868604], report in-flight READ commands (sqid:1, lba 23056-23544, SGL KEYED DATA BLOCK, key:0x182b00) and WRITE commands (sqid:1, lba 23552-24064, SGL DATA BLOCK OFFSET 0x0) all completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0]
00:23:10.819 [2024-07-15 13:53:23.870440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:10.819 [2024-07-15 13:53:23.870454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:10.819 [2024-07-15 13:53:23.870463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24072 len:8 PRP1 0x0 PRP2 0x0
00:23:10.819 [2024-07-15 13:53:23.870473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.819 [2024-07-15 13:53:23.870517] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
[2024-07-15 13:53:23.870529] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
[2024-07-15 13:53:23.870540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-15 13:53:23.873352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-15 13:53:23.887977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
[2024-07-15 13:53:23.930785] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
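The passage above captures the failover path exercised by host/failover.sh: once a listener of nqn.2016-06.io.spdk:cnode1 is removed, every command still queued on the corresponding RDMA qpair is completed as ABORTED - SQ DELETION, bdev_nvme frees the disconnected qpair, fails the transport ID over from 192.168.100.8:4420 to 192.168.100.8:4421, and the subsequent controller reset succeeds. As a minimal sketch only (the full failover.sh is not reproduced in this log; nvmf_subsystem_add_listener and the shell variable names are assumptions here, while the rpc.py path, NQN, transport and addresses are taken from the trace), the listener toggling that provokes such a failover looks roughly like:

  # illustrative sketch, not the actual failover.sh
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # rpc.py path seen in the trace
  nqn=nqn.2016-06.io.spdk:cnode1                                     # subsystem NQN seen in the trace

  # Removing a listener tears down its qpairs; outstanding I/O is completed as
  # ABORTED - SQ DELETION and host-side bdev_nvme fails over to the remaining port.
  $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4420

  # Re-adding the listener later lets the test fail back in the other direction
  # (assumed subcommand; same -t/-a/-s options as remove_listener above).
  $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420

In the traced run the script then waits on the bdevperf process (wait 2564806), kills the target via killprocess 2564610 and dumps try.txt, as shown at the top of this excerpt.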
00:23:10.819 [log condensed: a second burst of nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs, stamped 00:23:10.819-00:23:10.821 [2024-07-15 13:53:27.342656-13:53:27.343849], reports interleaved WRITE commands (sqid:1, lba 113256-113512, SGL DATA BLOCK OFFSET 0x0) and READ commands (sqid:1, lba 112776-112960, SGL KEYED DATA BLOCK, key:0x182b00) completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0]
00:23:10.821 [2024-07-15 13:53:27.343861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:23:10.821 [2024-07-15 13:53:27.343870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.343881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.821 [2024-07-15 13:53:27.343890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.343901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.821 [2024-07-15 13:53:27.343910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.343921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.821 [2024-07-15 13:53:27.343930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.343940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.821 [2024-07-15 13:53:27.343949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.343960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.821 [2024-07-15 13:53:27.343969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.343980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.821 [2024-07-15 13:53:27.343989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.344000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.821 [2024-07-15 13:53:27.344009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.344020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.821 [2024-07-15 13:53:27.344029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.344040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.821 [2024-07-15 13:53:27.344049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.344060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113600 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.821 [2024-07-15 13:53:27.344069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.344080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182b00 00:23:10.821 [2024-07-15 13:53:27.344089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.821 [2024-07-15 13:53:27.344099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344250] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113048 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:64 nsid:1 lba:113136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182b00 00:23:10.822 [2024-07-15 13:53:27.344884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.822 [2024-07-15 13:53:27.344895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.822 [2024-07-15 13:53:27.344904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.344915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:27.344924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.344934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:27.344944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.344954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:27.344963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.344975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:27.344984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.344994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:27.345004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:27.345023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 
dnr:0 00:23:10.823 [2024-07-15 13:53:27.345034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:27.345043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.345261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:27.345270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.347009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.823 [2024-07-15 13:53:27.347022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.823 [2024-07-15 13:53:27.347031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113248 len:8 PRP1 0x0 PRP2 0x0 00:23:10.823 [2024-07-15 13:53:27.347045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:27.347086] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:23:10.823 [2024-07-15 13:53:27.347098] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:23:10.823 [2024-07-15 13:53:27.347109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.823 [2024-07-15 13:53:27.349934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:10.823 [2024-07-15 13:53:27.364382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:10.823 [2024-07-15 13:53:27.407137] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
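[Editor's note: a hedged sketch of how a failover event like the one logged above (in-flight READ/WRITE commands aborted with "ABORTED - SQ DELETION", then "Start failover from 192.168.100.8:4421 to 192.168.100.8:4422" and a successful controller reset) could be provoked with SPDK's rpc.py against a running nvmf_tgt. The subsystem NQN and the 192.168.100.8:4421/4422 listeners are taken from the log; the bdev names (Malloc0, NVMe0), sizes, and serial number are illustrative assumptions, not the values used by this job's test scripts.

  # Target side (assumed setup, not this job's script): one RDMA subsystem with two listeners
  ./scripts/rpc.py nvmf_create_transport -t RDMA
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -f ipv4 -a 192.168.100.8 -s 4421
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -f ipv4 -a 192.168.100.8 -s 4422

  # Host side: register both paths under one controller name so bdev_nvme has an alternate trid to fail over to
  # (depending on SPDK version this is a second attach with the same -b name, or controlled via the multipath option)
  ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t rdma -f ipv4 -a 192.168.100.8 -s 4421 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t rdma -f ipv4 -a 192.168.100.8 -s 4422 -n nqn.2016-06.io.spdk:cnode1

  # Removing the active listener deletes its submission queues: queued I/O is aborted (SQ DELETION, 00/08)
  # and bdev_nvme fails the controller over to the remaining 4422 path, then resets it, as seen in the log.
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -f ipv4 -a 192.168.100.8 -s 4421

End of editor's note; the log continues with the next I/O burst after the controller reset.]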
00:23:10.823 [2024-07-15 13:53:31.718311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:31.718359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:31.718382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:31.718393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:31.718406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:31.718416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:31.718427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:31.718436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:31.718447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:31.718457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:31.718468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x182b00 00:23:10.823 [2024-07-15 13:53:31.718478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:31.718489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:31.718499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:31.718510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:31.718519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:31.718529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:31.718544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:31.718555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76392 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:31.718569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.823 [2024-07-15 13:53:31.718581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.823 [2024-07-15 13:53:31.718590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.718611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182b00 00:23:10.824 [2024-07-15 13:53:31.718938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 
00:23:10.824 [2024-07-15 13:53:31.718949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.718958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.718978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.718989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.718998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 
sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.824 [2024-07-15 13:53:31.719401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.824 [2024-07-15 13:53:31.719412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182b00 00:23:10.825 [2024-07-15 13:53:31.719441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182b00 00:23:10.825 [2024-07-15 13:53:31.719461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 
13:53:31.719748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.825 [2024-07-15 13:53:31.719789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x182b00 00:23:10.825 [2024-07-15 13:53:31.719810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x182b00 00:23:10.825 [2024-07-15 13:53:31.719830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x182b00 00:23:10.825 [2024-07-15 13:53:31.719850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x182b00 00:23:10.825 [2024-07-15 13:53:31.719870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182b00 00:23:10.825 [2024-07-15 13:53:31.719890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182b00 00:23:10.825 [2024-07-15 13:53:31.719911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182b00 00:23:10.825 [2024-07-15 13:53:31.719931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 
sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x182b00 00:23:10.825 [2024-07-15 13:53:31.719951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.825 [2024-07-15 13:53:31.719962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.719971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.719982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.719991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720125] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-07-15 13:53:31.720296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76744 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.826 [2024-07-15 13:53:31.720316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 
13:53:31.720502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720691] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.826 [2024-07-15 13:53:31.720702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x182b00 00:23:10.826 [2024-07-15 13:53:31.720713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 
sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.720947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x182b00 00:23:10.827 [2024-07-15 13:53:31.720956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:118c2000 sqhd:52b0 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.722721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.827 [2024-07-15 13:53:31.722737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.827 [2024-07-15 13:53:31.722746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76360 len:8 PRP1 0x0 PRP2 0x0 00:23:10.827 [2024-07-15 13:53:31.722756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.827 [2024-07-15 13:53:31.722803] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:23:10.827 [2024-07-15 13:53:31.722815] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:23:10.827 [2024-07-15 13:53:31.722826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.827 [2024-07-15 13:53:31.725648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:10.827 [2024-07-15 13:53:31.740071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:10.827 [2024-07-15 13:53:31.781271] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
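The run of 'ABORTED - SQ DELETION (00/08)' completions above is bdev_nvme draining the I/O that was still queued on the RDMA qpair to 192.168.100.8:4422 when that submission queue was torn down; the driver then fails the trid over to 192.168.100.8:4420, reconnects and resets the controller, which produces the 'Resetting controller successful' notice that the test counts below. As a hedged sketch (not part of failover.sh; the RPC socket path is borrowed from the second bdevperf instance in this run, the first may use the default socket), the reattach can be confirmed with the same RPC the suite uses later:

    # sketch: confirm NVMe0 came back after a failover-driven reset
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    if "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0; then
        echo 'NVMe0 reattached after controller reset'
    fi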
00:23:10.827 
00:23:10.827 Latency(us)
00:23:10.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:10.827 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:10.827 Verification LBA range: start 0x0 length 0x4000
00:23:10.827 NVMe0n1 : 15.01 14146.13 55.26 278.63 0.00 8850.59 356.17 1021221.84
00:23:10.827 ===================================================================================================================
00:23:10.827 Total : 14146.13 55.26 278.63 0.00 8850.59 356.17 1021221.84
00:23:10.827 Received shutdown signal, test time was about 15.000000 seconds
00:23:10.827 
00:23:10.827 Latency(us)
00:23:10.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:10.827 ===================================================================================================================
00:23:10.827 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2566809
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2566809 /var/tmp/bdevperf.sock
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2566809 ']'
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
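The 'grep -c Resetting controller successful' / 'count=3' / '(( count != 3 ))' trace above is the pass gate for the first phase: each of the three provoked path changes must end in a successful controller reset. A second bdevperf is then launched with -z (wait for an RPC client) and -r /var/tmp/bdevperf.sock so the next phase can drive it over that socket. An equivalent gate, written out as a hedged sketch with the log file path taken from the try.txt used elsewhere in this run:

    # sketch of the reset-count gate (try.txt path assumed from this log)
    log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    (( count == 3 )) || { echo "expected 3 successful resets, saw $count"; exit 1; }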
00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.827 13:53:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:11.758 13:53:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.758 13:53:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:11.758 13:53:38 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:11.758 [2024-07-15 13:53:38.155528] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:11.758 13:53:38 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:23:12.014 [2024-07-15 13:53:38.332159] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:23:12.014 13:53:38 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.271 NVMe0n1 00:23:12.271 13:53:38 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.528 00:23:12.528 13:53:38 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.784 00:23:12.784 13:53:39 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:12.784 13:53:39 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.784 13:53:39 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:13.041 13:53:39 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:16.312 13:53:42 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:16.312 13:53:42 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:16.312 13:53:42 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:16.312 13:53:42 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2567532 00:23:16.312 13:53:42 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 2567532 00:23:17.686 0 00:23:17.686 13:53:43 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:17.686 [2024-07-15 13:53:37.184715] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
00:23:17.686 [2024-07-15 13:53:37.184782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566809 ]
00:23:17.686 EAL: No free 2048 kB hugepages reported on node 1
00:23:17.686 [2024-07-15 13:53:37.273217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:17.686 [2024-07-15 13:53:37.355015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:17.686 [2024-07-15 13:53:39.461064] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:23:17.686 [2024-07-15 13:53:39.461631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:17.686 [2024-07-15 13:53:39.461665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:17.686 [2024-07-15 13:53:39.483301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:17.686 [2024-07-15 13:53:39.496312] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:17.686 Running I/O for 1 seconds...
00:23:17.686 
00:23:17.686 Latency(us)
00:23:17.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:17.686 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:17.686 Verification LBA range: start 0x0 length 0x4000
00:23:17.686 NVMe0n1 : 1.00 17860.13 69.77 0.00 0.00 7125.63 247.54 10656.72
00:23:17.686 ===================================================================================================================
00:23:17.686 Total : 17860.13 69.77 0.00 0.00 7125.63 247.54 10656.72
00:23:17.687 13:53:43 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:17.687 13:53:43 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:17.687 13:53:43 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:17.687 13:53:44 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:17.687 13:53:44 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:17.943 13:53:44 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:18.200 13:53:44 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 2566809
00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2566809 ']'
00:23:21.510 
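In this second phase NVMe0 was attached over all three trids (4420, 4421, 4422) and the paths are now removed from the host side with bdev_nvme_detach_controller, so the failover recorded in the embedded bdevperf log ('Start failover from 192.168.100.8:4420 to 192.168.100.8:4421') is driven by the initiator rather than the target; each bdev_nvme_get_controllers | grep -q NVMe0 checks that the controller name survives the path change. Roughly, the rotation amounts to the following hedged sketch (order and sleeps approximated from the trace above, not the literal failover.sh code):

    # sketch of the host-driven path rotation
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    for port in 4420 4422 4421; do
        "$rpc_py" -s "$sock" bdev_nvme_detach_controller NVMe0 -t rdma \
            -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
        sleep 3
        "$rpc_py" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0   # remaining paths keep NVMe0 attached
    done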
13:53:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2566809 00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2566809 00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2566809' 00:23:21.510 killing process with pid 2566809 00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2566809 00:23:21.510 13:53:47 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2566809 00:23:21.510 13:53:48 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:21.510 13:53:48 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:21.767 rmmod nvme_rdma 00:23:21.767 rmmod nvme_fabrics 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2564387 ']' 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2564387 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2564387 ']' 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2564387 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.767 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2564387 00:23:22.025 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:22.025 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:22.025 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 2564387' 00:23:22.025 killing process with pid 2564387 00:23:22.025 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2564387 00:23:22.025 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2564387 00:23:22.285 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.285 13:53:48 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:22.285 00:23:22.285 real 0m37.674s 00:23:22.285 user 2m4.576s 00:23:22.285 sys 0m7.815s 00:23:22.285 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.285 13:53:48 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:22.285 ************************************ 00:23:22.285 END TEST nvmf_failover 00:23:22.285 ************************************ 00:23:22.285 13:53:48 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:22.285 13:53:48 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:23:22.285 13:53:48 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:22.285 13:53:48 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.285 13:53:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:22.285 ************************************ 00:23:22.285 START TEST nvmf_host_discovery 00:23:22.285 ************************************ 00:23:22.285 13:53:48 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:23:22.285 * Looking for test storage... 00:23:22.544 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.544 13:53:48 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.544 13:53:48 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.545 13:53:48 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:23:22.545 13:53:48 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:23:22.545 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:23:22.545 13:53:48 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:23:22.545 00:23:22.545 real 0m0.145s 00:23:22.545 user 0m0.064s 00:23:22.545 sys 0m0.091s 00:23:22.545 13:53:48 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.545 13:53:48 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.545 ************************************ 00:23:22.545 END TEST nvmf_host_discovery 00:23:22.545 ************************************ 00:23:22.545 13:53:48 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:22.545 13:53:48 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:23:22.545 13:53:48 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:22.545 13:53:48 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.545 13:53:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:22.545 ************************************ 00:23:22.545 START TEST nvmf_host_multipath_status 00:23:22.545 ************************************ 00:23:22.545 13:53:48 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:23:22.545 * Looking for test storage... 
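nvmf_host_discovery exits 0 straight away on RDMA: discovery.sh only prints the skip notice above because the rdma stack cannot give host and target the same IP, and the suite then moves on to nvmf_host_multipath_status, which re-sources nvmf/common.sh and walks the PCI bus for supported NICs before loading the ib_* modules. The guard amounts to something like this hedged sketch (the transport variable name is an assumption; the message is the one echoed above):

    # sketch of the RDMA skip guard in discovery.sh (variable name assumed)
    if [ "$TEST_TRANSPORT" = rdma ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi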
00:23:22.545 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.545 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.804 13:53:49 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.804 13:53:49 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:23:29.375 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:23:29.375 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:23:29.376 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:29.376 
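(For readers reconstructing this step: the loop traced here matches Mellanox functions, vendor 0x15b3 with device IDs such as 0x1015, against the cached PCI list and then maps each hit to its network interface through sysfs. A minimal standalone sketch of the same idea follows; it substitutes lspci for the harness's internal PCI cache, so treat it as an approximation rather than the helper itself.)

    # Sketch: list ConnectX NICs (vendor 0x15b3, e.g. device 0x1015 as found above)
    # and print the net devices that sit under each PCI function.
    for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        echo "Found $pci ($(lspci -ns "$pci" | awk '{print $3}'))"
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] && echo "  net device under $pci: $(basename "$netdev")"
        done
    done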
13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:29.376 Found net devices under 0000:18:00.0: mlx_0_0 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:29.376 Found net devices under 0000:18:00.1: mlx_0_1 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:29.376 13:53:55 
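(Condensed for reference: the rdma_device_init / load_ib_rdma_modules step being traced is a fixed modprobe sequence. Assuming a kernel with the in-box RDMA stack, it is roughly the following; module names are taken from the trace, and nvme-rdma is loaded separately later, once the transport options are known.)

    # Kernel modules loaded before the RDMA interfaces are configured.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done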
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:29.376 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:29.376 link/ether 50:6b:4b:4b:c9:ae brd 
ff:ff:ff:ff:ff:ff 00:23:29.376 altname enp24s0f0np0 00:23:29.376 altname ens785f0np0 00:23:29.376 inet 192.168.100.8/24 scope global mlx_0_0 00:23:29.376 valid_lft forever preferred_lft forever 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:29.376 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:29.377 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:29.377 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:23:29.377 altname enp24s0f1np1 00:23:29.377 altname ens785f1np1 00:23:29.377 inet 192.168.100.9/24 scope global mlx_0_1 00:23:29.377 valid_lft forever preferred_lft forever 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.377 13:53:55 
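(The get_ip_address helper exercised twice above is a thin wrapper around ip(8). A sketch of the same extraction; the interface names and the 192.168.100.x addresses are the ones from this run.)

    # Print the first IPv4 address configured on an interface, without the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # 192.168.100.8 on this node
    get_ip_address mlx_0_1   # 192.168.100.9 on this node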
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:29.377 192.168.100.9' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:29.377 192.168.100.9' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:29.377 192.168.100.9' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2571374 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2571374 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2571374 ']' 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.377 13:53:55 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.636 [2024-07-15 13:53:55.935360] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:23:29.636 [2024-07-15 13:53:55.935429] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.636 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.636 [2024-07-15 13:53:56.021310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:29.636 [2024-07-15 13:53:56.116218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.636 [2024-07-15 13:53:56.116259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.636 [2024-07-15 13:53:56.116269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.636 [2024-07-15 13:53:56.116277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.636 [2024-07-15 13:53:56.116288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
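(nvmfappstart, whose trace ends just above, comes down to launching nvmf_tgt in the background and waiting for its RPC socket to answer. A hedged sketch with the flags and paths from this run; the polling loop stands in for the harness's waitforlisten and is a simplification.)

    # Start the NVMe-oF target on cores 0-1 (-m 0x3) with all tracepoint groups enabled,
    # then block until /var/tmp/spdk.sock accepts RPCs.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done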
00:23:29.636 [2024-07-15 13:53:56.116353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.636 [2024-07-15 13:53:56.116354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.571 13:53:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.571 13:53:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:30.571 13:53:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.571 13:53:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.571 13:53:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:30.571 13:53:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.571 13:53:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2571374 00:23:30.571 13:53:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:30.571 [2024-07-15 13:53:56.970213] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10c3a30/0x10c7f20) succeed. 00:23:30.571 [2024-07-15 13:53:56.979359] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10c4f30/0x11095b0) succeed. 00:23:30.571 13:53:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:30.828 Malloc0 00:23:30.828 13:53:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:31.086 13:53:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.343 13:53:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:31.343 [2024-07-15 13:53:57.827915] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:31.343 13:53:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:31.601 [2024-07-15 13:53:58.012253] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:31.601 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:31.601 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2571594 00:23:31.601 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:31.601 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
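(Strung together, the target-side RPCs traced in this stretch build the dual-listener subsystem that the rest of the multipath checks depend on. A condensed sketch with the same parameters; rpc_py is shorthand for the workspace rpc.py against the default /var/tmp/spdk.sock.)

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Two listeners on the same IP but different ports give the host two I/O paths.
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421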
waitforlisten 2571594 /var/tmp/bdevperf.sock 00:23:31.601 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2571594 ']' 00:23:31.601 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.601 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:31.601 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.601 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:31.601 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:32.532 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:32.532 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:32.532 13:53:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:32.789 13:53:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:33.046 Nvme0n1 00:23:33.046 13:53:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:33.046 Nvme0n1 00:23:33.316 13:53:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:33.316 13:53:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:35.296 13:54:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:35.296 13:54:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:23:35.296 13:54:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:35.553 13:54:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:36.487 13:54:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:36.487 13:54:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:36.487 13:54:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
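(On the host side, the bdevperf process started above is driven over its own RPC socket: reconnects are made unlimited and the same subsystem is attached once per listener, with -x multipath on the second call, so that both paths end up behind a single Nvme0n1 bdev. Sketched with the arguments from the trace; rpc_bdev is just shorthand for rpc.py pointed at /var/tmp/bdevperf.sock.)

    rpc_bdev="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    $rpc_bdev bdev_nvme_set_options -r -1
    $rpc_bdev bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rpc_bdev bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10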
bdev_nvme_get_io_paths 00:23:36.487 13:54:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:36.745 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.745 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:36.745 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:36.745 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.003 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.003 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:37.003 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:37.003 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.261 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.261 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:37.261 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:37.261 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.261 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.261 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:37.261 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.261 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:37.520 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.520 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:37.520 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.520 13:54:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.778 13:54:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.778 13:54:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # 
set_ANA_state non_optimized optimized 00:23:37.778 13:54:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:38.036 13:54:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:38.036 13:54:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:39.410 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:39.410 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:39.410 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.410 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:39.410 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:39.410 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:39.410 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.410 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.667 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.667 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.667 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.667 13:54:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.667 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.667 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.667 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.667 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:39.936 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.936 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:39.936 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.936 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:40.195 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.195 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:40.195 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.195 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.195 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.195 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:40.195 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:40.453 13:54:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:23:40.711 13:54:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:41.644 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:41.644 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:41.644 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.644 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:41.902 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.902 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:41.902 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.902 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:42.159 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:42.159 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:42.159 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.159 
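(Every port_status call in this trace is the same query: dump the I/O paths from bdevperf and pick one field of the path whose trsvcid matches the port under test. Pulled out here for readability; the field is one of current, connected or accessible, shown for the 4420/accessible case.)

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'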
13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:42.159 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.159 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:42.417 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.417 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:42.417 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.417 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:42.417 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.417 13:54:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:42.674 13:54:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.674 13:54:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:42.674 13:54:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:42.674 13:54:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.932 13:54:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.932 13:54:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:42.932 13:54:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:42.932 13:54:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:23:43.190 13:54:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:44.124 13:54:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:44.124 13:54:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:44.124 13:54:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.124 13:54:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:23:44.382 13:54:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.382 13:54:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:44.382 13:54:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.382 13:54:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:44.640 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.640 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:44.640 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.640 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:44.897 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.897 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:44.897 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.897 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:45.156 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.156 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:45.156 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.156 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:45.156 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.156 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:45.156 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.156 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:45.413 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.413 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:45.413 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:23:45.671 13:54:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:23:45.671 13:54:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.041 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:47.298 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.298 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:47.298 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.298 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:47.555 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.555 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:47.555 13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.555 
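(set_ANA_state, which the test keeps cycling, is nothing more than one nvmf_subsystem_listener_set_ana_state call per listener followed by a short settle time before check_status runs; sketched with the inaccessible/inaccessible case traced just above.)

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420 -n inaccessible
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
    sleep 1    # give the host time to observe the ANA change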
13:54:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:47.812 13:54:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.812 13:54:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:47.812 13:54:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.812 13:54:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:47.812 13:54:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.812 13:54:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:47.812 13:54:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:23:48.069 13:54:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:48.326 13:54:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:49.258 13:54:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:49.258 13:54:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:49.258 13:54:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.258 13:54:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:49.516 13:54:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.516 13:54:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:49.516 13:54:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.516 13:54:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:49.773 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.773 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:49.773 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.773 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:23:49.773 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.773 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:49.773 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:49.773 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.031 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.031 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:50.031 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.031 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:50.289 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.289 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:50.289 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.289 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:50.546 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.546 13:54:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:50.546 13:54:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:50.546 13:54:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:23:50.804 13:54:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:51.062 13:54:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:51.995 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:51.995 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:51.995 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.995 13:54:18 
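(The policy change traced just above, bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, switches the Nvme0n1 bdev away from the default active_passive behaviour; with both listeners optimized, the very next check_status therefore expects current to be true for the 4420 and 4421 paths at the same time. The one-line RPC as issued:)

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active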
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:52.253 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.253 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:52.253 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.253 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:52.511 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.511 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:52.511 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.511 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:52.511 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.511 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:52.511 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.511 13:54:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:52.769 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.769 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:52.769 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.769 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:53.027 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.027 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:53.027 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.027 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:53.285 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.285 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:53.285 
13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:53.285 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:23:53.543 13:54:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:54.478 13:54:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:54.478 13:54:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:54.478 13:54:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.478 13:54:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.779 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.779 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:54.779 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.779 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.054 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.054 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:55.054 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.054 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:55.054 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.054 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:55.054 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:55.054 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.311 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.312 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:55.312 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:23:55.312 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.569 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.569 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:55.569 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.569 13:54:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.569 13:54:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.569 13:54:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:55.569 13:54:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:55.829 13:54:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:23:56.088 13:54:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:57.033 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:57.033 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:57.033 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.033 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.291 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.291 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:57.291 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.291 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:57.549 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.549 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:57.549 13:54:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.549 13:54:23 nvmf_rdma.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:57.549 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.549 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:57.549 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.549 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:57.806 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.806 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:57.806 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.806 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:58.064 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.064 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:58.064 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:58.064 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.321 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.321 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:58.321 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:23:58.321 13:54:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:23:58.579 13:54:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:59.512 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:59.512 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:59.512 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.512 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 
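The traced check_status loop above (its last probes continue just below) reduces to two RPCs, which the sketch here isolates: bdev_nvme_get_io_paths against the bdevperf RPC socket, filtered with jq per trsvcid, and nvmf_subsystem_listener_set_ana_state on the target to flip one listener's ANA state before re-probing. This is a minimal, hedged sketch assembled only from commands visible in the trace; the variable names and standalone layout are illustrative, not multipath_status.sh itself.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Probe one field of one path, e.g. "is the 4420 path still current?"
# (port_status in the trace compares this output against the expected
# literal with a [[ ... == ... ]] test.)
$rpc -s "$sock" bdev_nvme_get_io_paths \
  | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'

# Flip the ANA state of one listener on the target, then give the host a
# second to notice before the next round of probes (the trace sleeps 1).
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
  -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
sleep 1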
00:23:59.770 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.770 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:59.770 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.770 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.028 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.028 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:00.028 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.028 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.286 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.286 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.286 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.286 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.286 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.286 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.286 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.286 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:00.544 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.544 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:00.544 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.544 13:54:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2571594 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2571594 ']' 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- 
common/autotest_common.sh@952 -- # kill -0 2571594 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2571594 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2571594' 00:24:00.802 killing process with pid 2571594 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2571594 00:24:00.802 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2571594 00:24:01.064 Connection closed with partial response: 00:24:01.064 00:24:01.064 00:24:01.064 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2571594 00:24:01.064 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.064 [2024-07-15 13:53:58.088161] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:24:01.064 [2024-07-15 13:53:58.088226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571594 ] 00:24:01.064 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.064 [2024-07-15 13:53:58.172677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.064 [2024-07-15 13:53:58.261559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.064 Running I/O for 90 seconds... 
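Once the final round of port checks passes, the trace above tears the host side down via the killprocess helper (pid 2571594 is this run's bdevperf process) and then cats the bdevperf log, try.txt, whose contents make up the rest of the output. A rough, hedged sketch of that teardown sequence, reconstructed only from the traced commands:

pid=2571594
kill -0 "$pid"                      # still running?
ps --no-headers -o comm= "$pid"     # command name; the trace records reactor_2
echo "killing process with pid $pid"
kill "$pid"
wait "$pid"                         # bdevperf exits: "Connection closed with partial response"
cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt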
00:24:01.064 [2024-07-15 13:54:11.967168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183100 00:24:01.064 [2024-07-15 13:54:11.967212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:01.064 [2024-07-15 13:54:11.967253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183100 00:24:01.064 [2024-07-15 13:54:11.967265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:01.064 [2024-07-15 13:54:11.967277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183100 00:24:01.064 [2024-07-15 13:54:11.967287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:01.064 [2024-07-15 13:54:11.967298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183100 00:24:01.064 [2024-07-15 13:54:11.967308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:01.064 [2024-07-15 13:54:11.967320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:118680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 
13:54:11.967423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967620] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183100 00:24:01.065 [2024-07-15 13:54:11.967755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.967981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.967992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.968001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.968013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.968022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.968033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.968042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.968053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.968062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.968074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.968082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.968094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.968103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.968114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.968123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.968134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.065 [2024-07-15 13:54:11.968143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:01.065 [2024-07-15 13:54:11.968154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:24:01.066 [2024-07-15 13:54:11.968217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.968981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.968990] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119528 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.066 [2024-07-15 13:54:11.969659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183100 00:24:01.066 [2024-07-15 13:54:11.969684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183100 00:24:01.066 [2024-07-15 13:54:11.969711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183100 00:24:01.066 [2024-07-15 13:54:11.969737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183100 00:24:01.066 [2024-07-15 13:54:11.969762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183100 00:24:01.066 [2024-07-15 13:54:11.969787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183100 00:24:01.066 [2024-07-15 13:54:11.969812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:01.066 [2024-07-15 13:54:11.969828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.067 [2024-07-15 13:54:11.969837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.969853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.067 [2024-07-15 13:54:11.969862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.969878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.067 [2024-07-15 13:54:11.969887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.969903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.067 [2024-07-15 13:54:11.969912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.969927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.067 [2024-07-15 13:54:11.969937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.969953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.067 [2024-07-15 13:54:11.969962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.969978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.067 [2024-07-15 13:54:11.969988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.067 [2024-07-15 13:54:11.970015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.067 [2024-07-15 13:54:11.970040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183100 
00:24:01.067 [2024-07-15 13:54:11.970066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 
13:54:11.970296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970523] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x183100 00:24:01.067 [2024-07-15 13:54:11.970783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:01.067 [2024-07-15 13:54:11.970799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:11.970808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:11.970825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:11.970834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:11.970850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:11.970859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:11.970876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:11.970886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.975342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.975386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.975424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.975436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.975449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.975459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.975471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.975481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:24:01.068 [2024-07-15 13:54:24.975493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.975502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:01.068 [2024-07-15 13:54:24.976326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:01.068 
[2024-07-15 13:54:24.976530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.068 [2024-07-15 13:54:24.976648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x183100 00:24:01.068 [2024-07-15 13:54:24.976690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:01.068 [2024-07-15 13:54:24.976702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.976711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24016 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.976732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.976753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.976776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.976797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.976818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.976839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.976860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.976881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.976903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.976924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 
sqhd:0056 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.976945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.976967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.976979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.976988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.977009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.977031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.977052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.977074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.977095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.977116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24704 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.977137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.977158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.977179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.977200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.977221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.977242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.977264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183100 00:24:01.069 [2024-07-15 13:54:24.977285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.977307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:01.069 [2024-07-15 13:54:24.977319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.069 [2024-07-15 13:54:24.977328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 
cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:01.069 [2024-07-15 13:54:24.977340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:01.069 [2024-07-15 13:54:24.977349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:01.069 [2024-07-15 13:54:24.977361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x183100
00:24:01.069 [2024-07-15 13:54:24.977370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:01.069 [2024-07-15 13:54:24.977382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183100
00:24:01.069 [2024-07-15 13:54:24.977391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:01.069 Received shutdown signal, test time was about 27.464496 seconds
00:24:01.069
00:24:01.069                                                              Latency(us)
00:24:01.069 Device Information : runtime(s)      IOPS     MiB/s    Fail/s      TO/s   Average       min        max
00:24:01.069 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:01.069      Verification LBA range: start 0x0 length 0x4000
00:24:01.069      Nvme0n1 :        27.46  15940.80     62.27      0.00      0.00   8010.19     54.54 3019898.88
00:24:01.069 ===================================================================================================================
00:24:01.069 Total :                       15940.80     62.27      0.00      0.00   8010.19     54.54 3019898.88
00:24:01.069 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:24:01.327 rmmod nvme_rdma
00:24:01.327 rmmod nvme_fabrics
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- 
nvmf/common.sh@489 -- # '[' -n 2571374 ']' 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2571374 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2571374 ']' 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2571374 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2571374 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2571374' 00:24:01.327 killing process with pid 2571374 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2571374 00:24:01.327 13:54:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2571374 00:24:01.586 13:54:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.586 13:54:28 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:01.586 00:24:01.586 real 0m39.093s 00:24:01.586 user 1m50.357s 00:24:01.586 sys 0m9.556s 00:24:01.586 13:54:28 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.586 13:54:28 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:01.586 ************************************ 00:24:01.586 END TEST nvmf_host_multipath_status 00:24:01.586 ************************************ 00:24:01.586 13:54:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:01.586 13:54:28 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:24:01.586 13:54:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:01.586 13:54:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.586 13:54:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:01.586 ************************************ 00:24:01.586 START TEST nvmf_discovery_remove_ifc 00:24:01.586 ************************************ 00:24:01.586 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:24:01.845 * Looking for test storage... 
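The NOTICE flood above is expected for this test: nvmf_host_multipath_status drives one of the two RDMA paths through the ANA 'inaccessible' state, so every in-flight READ/WRITE on that path is printed by nvme_qpair.c together with its ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion before being retried on the remaining path, and the bdevperf summary still closes the 27.46 s run at roughly 15,940 IOPS with zero Fail/s and TO/s. A quick, hedged way to triage such a flood offline is sketched below; the console.log filename is only an assumption, substitute the saved console output.

  # count ANA-inaccessible completions per queue in a saved copy of this console log
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' console.log | sort | uniq -c
  # split the flood into printed submissions vs printed completions
  awk '/nvme_io_qpair_print_command/ {c++} /spdk_nvme_print_completion/ {p++} END {print c, "commands,", p, "completions"}' console.log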
00:24:01.845 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:01.845 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
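The skip above is the entire effective body of discovery_remove_ifc.sh on this transport: the trace shows the script comparing the transport against rdma, printing the message, and then (in the trace that follows) exiting 0, so the test is recorded as passed after only a fraction of a second. A minimal sketch of that guard, assuming the transport name arrives in a TEST_TRANSPORT-style variable (the trace only shows the already-expanded comparison):

  #!/usr/bin/env bash
  # transport guard sketch: bail out early when the test cannot run on rdma
  if [[ "$TEST_TRANSPORT" == "rdma" ]]; then
    echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
    exit 0
  fi
  # ... the tcp-capable discovery / interface-removal test body would follow here ...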
00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:24:01.845 00:24:01.845 real 0m0.139s 00:24:01.845 user 0m0.058s 00:24:01.845 sys 0m0.092s 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.845 13:54:28 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.845 ************************************ 00:24:01.845 END TEST nvmf_discovery_remove_ifc 00:24:01.845 ************************************ 00:24:01.845 13:54:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:01.845 13:54:28 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:24:01.845 13:54:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:01.845 13:54:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.845 13:54:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:01.845 ************************************ 00:24:01.845 START TEST nvmf_identify_kernel_target 00:24:01.845 ************************************ 00:24:01.845 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:24:02.104 * Looking for test storage... 00:24:02.104 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:02.104 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.104 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:02.104 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.104 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.104 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.104 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.104 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.104 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.104 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.105 13:54:28 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.105 13:54:28 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.105 13:54:28 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 
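At this point identify_kernel_nvmf.sh has entered nvmftestinit: with NET_TYPE=phy the helper calls prepare_net_devs and then gather_supported_nvmf_pci_devs, whose array setup continues just below, collecting the PCI vendor/device IDs of supported Intel and Mellanox NICs before matching them against the bus (the two 0x15b3/0x1015 ports found on this node). A rough, illustrative equivalent of that lookup for just the Mellanox IDs reported here:

  # list the 15b3:1015 ports the way the 'Found 0000:18:00.x' lines below do,
  # then show the netdevs behind each port (mlx_0_0 / mlx_0_1 on this node)
  lspci -Dnn -d 15b3:1015 | while read -r addr _; do
    echo "Found ${addr} (0x15b3 - 0x1015)"
    ls "/sys/bus/pci/devices/${addr}/net" 2>/dev/null
  done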
00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:24:08.671 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:24:08.671 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:24:08.671 Found net devices under 0000:18:00.0: mlx_0_0 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:24:08.671 Found net devices under 0000:18:00.1: mlx_0_1 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:08.671 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:24:08.672 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:08.672 
13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:24:08.672 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:08.672 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:08.672 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:08.672 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:08.672 13:54:34 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 
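The load_ib_rdma_modules step traced just above can be reproduced outside the harness with a short root-only loop; the module list is exactly what the xtrace shows being modprobed before NIC IPs are assigned:

    # Load the kernel IB/RDMA stack that nvme-rdma and the nvmet RDMA port rely on.
    # Needs root; harmless if the modules are already loaded or built in.
    [[ "$(uname)" == Linux ]] || exit 0
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done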
00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:08.672 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:08.672 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:24:08.672 altname enp24s0f0np0 00:24:08.672 altname ens785f0np0 00:24:08.672 inet 192.168.100.8/24 scope global mlx_0_0 00:24:08.672 valid_lft forever preferred_lft forever 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:08.672 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:08.672 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:24:08.672 altname enp24s0f1np1 00:24:08.672 altname ens785f1np1 00:24:08.672 inet 192.168.100.9/24 scope global mlx_0_1 00:24:08.672 valid_lft forever preferred_lft forever 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:08.672 192.168.100.9' 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:08.672 192.168.100.9' 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:08.672 192.168.100.9' 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:08.672 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:08.932 13:54:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:24:12.222 Waiting for block devices as requested 00:24:12.222 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:24:12.222 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:12.222 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:12.222 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:12.482 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:12.482 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:12.742 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:12.742 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:12.742 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:13.001 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:13.001 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:13.001 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:13.261 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:13.261 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:13.261 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:13.520 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:13.521 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:13.521 13:54:39 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:13.521 13:54:39 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:13.521 13:54:39 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:13.521 13:54:39 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:13.521 13:54:39 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:13.521 13:54:39 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:13.521 13:54:39 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:13.521 13:54:39 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:13.521 13:54:39 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:13.521 No valid GPT data, bailing 00:24:13.521 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:13.521 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:13.521 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:13.521 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:13.521 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:13.521 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir 
/sys/kernel/config/nvmet/ports/1 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:24:13.780 00:24:13.780 Discovery Log Number of Records 2, Generation counter 2 00:24:13.780 =====Discovery Log Entry 0====== 00:24:13.780 trtype: rdma 00:24:13.780 adrfam: ipv4 00:24:13.780 subtype: current discovery subsystem 00:24:13.780 treq: not specified, sq flow control disable supported 00:24:13.780 portid: 1 00:24:13.780 trsvcid: 4420 00:24:13.780 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:13.780 traddr: 192.168.100.8 00:24:13.780 eflags: none 00:24:13.780 rdma_prtype: not specified 00:24:13.780 rdma_qptype: connected 00:24:13.780 rdma_cms: rdma-cm 00:24:13.780 rdma_pkey: 0x0000 00:24:13.780 =====Discovery Log Entry 1====== 00:24:13.780 trtype: rdma 00:24:13.780 adrfam: ipv4 00:24:13.780 subtype: nvme subsystem 00:24:13.780 treq: not specified, sq flow control disable supported 00:24:13.780 portid: 1 00:24:13.780 trsvcid: 4420 00:24:13.780 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:13.780 traddr: 192.168.100.8 00:24:13.780 eflags: none 00:24:13.780 rdma_prtype: not specified 00:24:13.780 rdma_qptype: connected 00:24:13.780 rdma_cms: rdma-cm 00:24:13.780 rdma_pkey: 0x0000 00:24:13.780 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:24:13.780 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:13.780 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.040 ===================================================== 00:24:14.040 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:14.040 ===================================================== 00:24:14.040 Controller Capabilities/Features 00:24:14.040 ================================ 00:24:14.040 Vendor ID: 0000 00:24:14.040 Subsystem Vendor ID: 0000 00:24:14.040 Serial Number: 5846d73870590fc0db2e 00:24:14.040 Model Number: Linux 00:24:14.040 Firmware Version: 6.7.0-68 00:24:14.040 Recommended Arb Burst: 0 00:24:14.040 IEEE OUI Identifier: 00 00 00 00:24:14.040 Multi-path I/O 00:24:14.040 May have multiple subsystem ports: No 00:24:14.040 May have multiple controllers: No 00:24:14.040 Associated with SR-IOV VF: No 00:24:14.040 
Max Data Transfer Size: Unlimited 00:24:14.040 Max Number of Namespaces: 0 00:24:14.040 Max Number of I/O Queues: 1024 00:24:14.040 NVMe Specification Version (VS): 1.3 00:24:14.040 NVMe Specification Version (Identify): 1.3 00:24:14.040 Maximum Queue Entries: 128 00:24:14.040 Contiguous Queues Required: No 00:24:14.040 Arbitration Mechanisms Supported 00:24:14.040 Weighted Round Robin: Not Supported 00:24:14.040 Vendor Specific: Not Supported 00:24:14.040 Reset Timeout: 7500 ms 00:24:14.040 Doorbell Stride: 4 bytes 00:24:14.040 NVM Subsystem Reset: Not Supported 00:24:14.040 Command Sets Supported 00:24:14.040 NVM Command Set: Supported 00:24:14.040 Boot Partition: Not Supported 00:24:14.040 Memory Page Size Minimum: 4096 bytes 00:24:14.040 Memory Page Size Maximum: 4096 bytes 00:24:14.040 Persistent Memory Region: Not Supported 00:24:14.040 Optional Asynchronous Events Supported 00:24:14.040 Namespace Attribute Notices: Not Supported 00:24:14.040 Firmware Activation Notices: Not Supported 00:24:14.040 ANA Change Notices: Not Supported 00:24:14.040 PLE Aggregate Log Change Notices: Not Supported 00:24:14.040 LBA Status Info Alert Notices: Not Supported 00:24:14.040 EGE Aggregate Log Change Notices: Not Supported 00:24:14.040 Normal NVM Subsystem Shutdown event: Not Supported 00:24:14.040 Zone Descriptor Change Notices: Not Supported 00:24:14.040 Discovery Log Change Notices: Supported 00:24:14.040 Controller Attributes 00:24:14.040 128-bit Host Identifier: Not Supported 00:24:14.040 Non-Operational Permissive Mode: Not Supported 00:24:14.040 NVM Sets: Not Supported 00:24:14.040 Read Recovery Levels: Not Supported 00:24:14.040 Endurance Groups: Not Supported 00:24:14.040 Predictable Latency Mode: Not Supported 00:24:14.040 Traffic Based Keep ALive: Not Supported 00:24:14.040 Namespace Granularity: Not Supported 00:24:14.040 SQ Associations: Not Supported 00:24:14.040 UUID List: Not Supported 00:24:14.040 Multi-Domain Subsystem: Not Supported 00:24:14.040 Fixed Capacity Management: Not Supported 00:24:14.040 Variable Capacity Management: Not Supported 00:24:14.040 Delete Endurance Group: Not Supported 00:24:14.040 Delete NVM Set: Not Supported 00:24:14.040 Extended LBA Formats Supported: Not Supported 00:24:14.040 Flexible Data Placement Supported: Not Supported 00:24:14.040 00:24:14.040 Controller Memory Buffer Support 00:24:14.040 ================================ 00:24:14.040 Supported: No 00:24:14.040 00:24:14.040 Persistent Memory Region Support 00:24:14.040 ================================ 00:24:14.040 Supported: No 00:24:14.040 00:24:14.040 Admin Command Set Attributes 00:24:14.040 ============================ 00:24:14.040 Security Send/Receive: Not Supported 00:24:14.040 Format NVM: Not Supported 00:24:14.040 Firmware Activate/Download: Not Supported 00:24:14.040 Namespace Management: Not Supported 00:24:14.040 Device Self-Test: Not Supported 00:24:14.040 Directives: Not Supported 00:24:14.040 NVMe-MI: Not Supported 00:24:14.040 Virtualization Management: Not Supported 00:24:14.040 Doorbell Buffer Config: Not Supported 00:24:14.040 Get LBA Status Capability: Not Supported 00:24:14.040 Command & Feature Lockdown Capability: Not Supported 00:24:14.040 Abort Command Limit: 1 00:24:14.040 Async Event Request Limit: 1 00:24:14.040 Number of Firmware Slots: N/A 00:24:14.040 Firmware Slot 1 Read-Only: N/A 00:24:14.040 Firmware Activation Without Reset: N/A 00:24:14.040 Multiple Update Detection Support: N/A 00:24:14.040 Firmware Update Granularity: No Information Provided 00:24:14.040 
Per-Namespace SMART Log: No 00:24:14.040 Asymmetric Namespace Access Log Page: Not Supported 00:24:14.041 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:14.041 Command Effects Log Page: Not Supported 00:24:14.041 Get Log Page Extended Data: Supported 00:24:14.041 Telemetry Log Pages: Not Supported 00:24:14.041 Persistent Event Log Pages: Not Supported 00:24:14.041 Supported Log Pages Log Page: May Support 00:24:14.041 Commands Supported & Effects Log Page: Not Supported 00:24:14.041 Feature Identifiers & Effects Log Page:May Support 00:24:14.041 NVMe-MI Commands & Effects Log Page: May Support 00:24:14.041 Data Area 4 for Telemetry Log: Not Supported 00:24:14.041 Error Log Page Entries Supported: 1 00:24:14.041 Keep Alive: Not Supported 00:24:14.041 00:24:14.041 NVM Command Set Attributes 00:24:14.041 ========================== 00:24:14.041 Submission Queue Entry Size 00:24:14.041 Max: 1 00:24:14.041 Min: 1 00:24:14.041 Completion Queue Entry Size 00:24:14.041 Max: 1 00:24:14.041 Min: 1 00:24:14.041 Number of Namespaces: 0 00:24:14.041 Compare Command: Not Supported 00:24:14.041 Write Uncorrectable Command: Not Supported 00:24:14.041 Dataset Management Command: Not Supported 00:24:14.041 Write Zeroes Command: Not Supported 00:24:14.041 Set Features Save Field: Not Supported 00:24:14.041 Reservations: Not Supported 00:24:14.041 Timestamp: Not Supported 00:24:14.041 Copy: Not Supported 00:24:14.041 Volatile Write Cache: Not Present 00:24:14.041 Atomic Write Unit (Normal): 1 00:24:14.041 Atomic Write Unit (PFail): 1 00:24:14.041 Atomic Compare & Write Unit: 1 00:24:14.041 Fused Compare & Write: Not Supported 00:24:14.041 Scatter-Gather List 00:24:14.041 SGL Command Set: Supported 00:24:14.041 SGL Keyed: Supported 00:24:14.041 SGL Bit Bucket Descriptor: Not Supported 00:24:14.041 SGL Metadata Pointer: Not Supported 00:24:14.041 Oversized SGL: Not Supported 00:24:14.041 SGL Metadata Address: Not Supported 00:24:14.041 SGL Offset: Supported 00:24:14.041 Transport SGL Data Block: Not Supported 00:24:14.041 Replay Protected Memory Block: Not Supported 00:24:14.041 00:24:14.041 Firmware Slot Information 00:24:14.041 ========================= 00:24:14.041 Active slot: 0 00:24:14.041 00:24:14.041 00:24:14.041 Error Log 00:24:14.041 ========= 00:24:14.041 00:24:14.041 Active Namespaces 00:24:14.041 ================= 00:24:14.041 Discovery Log Page 00:24:14.041 ================== 00:24:14.041 Generation Counter: 2 00:24:14.041 Number of Records: 2 00:24:14.041 Record Format: 0 00:24:14.041 00:24:14.041 Discovery Log Entry 0 00:24:14.041 ---------------------- 00:24:14.041 Transport Type: 1 (RDMA) 00:24:14.041 Address Family: 1 (IPv4) 00:24:14.041 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:14.041 Entry Flags: 00:24:14.041 Duplicate Returned Information: 0 00:24:14.041 Explicit Persistent Connection Support for Discovery: 0 00:24:14.041 Transport Requirements: 00:24:14.041 Secure Channel: Not Specified 00:24:14.041 Port ID: 1 (0x0001) 00:24:14.041 Controller ID: 65535 (0xffff) 00:24:14.041 Admin Max SQ Size: 32 00:24:14.041 Transport Service Identifier: 4420 00:24:14.041 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:14.041 Transport Address: 192.168.100.8 00:24:14.041 Transport Specific Address Subtype - RDMA 00:24:14.041 RDMA QP Service Type: 1 (Reliable Connected) 00:24:14.041 RDMA Provider Type: 1 (No provider specified) 00:24:14.041 RDMA CM Service: 1 (RDMA_CM) 00:24:14.041 Discovery Log Entry 1 00:24:14.041 ---------------------- 00:24:14.041 
Transport Type: 1 (RDMA) 00:24:14.041 Address Family: 1 (IPv4) 00:24:14.041 Subsystem Type: 2 (NVM Subsystem) 00:24:14.041 Entry Flags: 00:24:14.041 Duplicate Returned Information: 0 00:24:14.041 Explicit Persistent Connection Support for Discovery: 0 00:24:14.041 Transport Requirements: 00:24:14.041 Secure Channel: Not Specified 00:24:14.041 Port ID: 1 (0x0001) 00:24:14.041 Controller ID: 65535 (0xffff) 00:24:14.041 Admin Max SQ Size: 32 00:24:14.041 Transport Service Identifier: 4420 00:24:14.041 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:14.041 Transport Address: 192.168.100.8 00:24:14.041 Transport Specific Address Subtype - RDMA 00:24:14.041 RDMA QP Service Type: 1 (Reliable Connected) 00:24:14.041 RDMA Provider Type: 1 (No provider specified) 00:24:14.041 RDMA CM Service: 1 (RDMA_CM) 00:24:14.041 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:14.041 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.041 get_feature(0x01) failed 00:24:14.041 get_feature(0x02) failed 00:24:14.041 get_feature(0x04) failed 00:24:14.041 ===================================================== 00:24:14.041 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:24:14.041 ===================================================== 00:24:14.041 Controller Capabilities/Features 00:24:14.041 ================================ 00:24:14.041 Vendor ID: 0000 00:24:14.041 Subsystem Vendor ID: 0000 00:24:14.041 Serial Number: 641b5f41de2bd8c06c56 00:24:14.041 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:14.041 Firmware Version: 6.7.0-68 00:24:14.041 Recommended Arb Burst: 6 00:24:14.041 IEEE OUI Identifier: 00 00 00 00:24:14.041 Multi-path I/O 00:24:14.041 May have multiple subsystem ports: Yes 00:24:14.041 May have multiple controllers: Yes 00:24:14.041 Associated with SR-IOV VF: No 00:24:14.041 Max Data Transfer Size: 1048576 00:24:14.041 Max Number of Namespaces: 1024 00:24:14.041 Max Number of I/O Queues: 128 00:24:14.041 NVMe Specification Version (VS): 1.3 00:24:14.041 NVMe Specification Version (Identify): 1.3 00:24:14.041 Maximum Queue Entries: 128 00:24:14.041 Contiguous Queues Required: No 00:24:14.041 Arbitration Mechanisms Supported 00:24:14.041 Weighted Round Robin: Not Supported 00:24:14.041 Vendor Specific: Not Supported 00:24:14.041 Reset Timeout: 7500 ms 00:24:14.041 Doorbell Stride: 4 bytes 00:24:14.041 NVM Subsystem Reset: Not Supported 00:24:14.041 Command Sets Supported 00:24:14.041 NVM Command Set: Supported 00:24:14.041 Boot Partition: Not Supported 00:24:14.041 Memory Page Size Minimum: 4096 bytes 00:24:14.041 Memory Page Size Maximum: 4096 bytes 00:24:14.041 Persistent Memory Region: Not Supported 00:24:14.041 Optional Asynchronous Events Supported 00:24:14.041 Namespace Attribute Notices: Supported 00:24:14.041 Firmware Activation Notices: Not Supported 00:24:14.041 ANA Change Notices: Supported 00:24:14.041 PLE Aggregate Log Change Notices: Not Supported 00:24:14.041 LBA Status Info Alert Notices: Not Supported 00:24:14.041 EGE Aggregate Log Change Notices: Not Supported 00:24:14.041 Normal NVM Subsystem Shutdown event: Not Supported 00:24:14.041 Zone Descriptor Change Notices: Not Supported 00:24:14.041 Discovery Log Change Notices: Not Supported 00:24:14.041 Controller Attributes 00:24:14.041 128-bit Host Identifier: 
Supported 00:24:14.041 Non-Operational Permissive Mode: Not Supported 00:24:14.041 NVM Sets: Not Supported 00:24:14.041 Read Recovery Levels: Not Supported 00:24:14.041 Endurance Groups: Not Supported 00:24:14.041 Predictable Latency Mode: Not Supported 00:24:14.041 Traffic Based Keep ALive: Supported 00:24:14.041 Namespace Granularity: Not Supported 00:24:14.041 SQ Associations: Not Supported 00:24:14.041 UUID List: Not Supported 00:24:14.041 Multi-Domain Subsystem: Not Supported 00:24:14.041 Fixed Capacity Management: Not Supported 00:24:14.041 Variable Capacity Management: Not Supported 00:24:14.041 Delete Endurance Group: Not Supported 00:24:14.041 Delete NVM Set: Not Supported 00:24:14.041 Extended LBA Formats Supported: Not Supported 00:24:14.041 Flexible Data Placement Supported: Not Supported 00:24:14.041 00:24:14.041 Controller Memory Buffer Support 00:24:14.041 ================================ 00:24:14.041 Supported: No 00:24:14.041 00:24:14.041 Persistent Memory Region Support 00:24:14.041 ================================ 00:24:14.041 Supported: No 00:24:14.041 00:24:14.041 Admin Command Set Attributes 00:24:14.041 ============================ 00:24:14.041 Security Send/Receive: Not Supported 00:24:14.041 Format NVM: Not Supported 00:24:14.041 Firmware Activate/Download: Not Supported 00:24:14.041 Namespace Management: Not Supported 00:24:14.041 Device Self-Test: Not Supported 00:24:14.041 Directives: Not Supported 00:24:14.041 NVMe-MI: Not Supported 00:24:14.041 Virtualization Management: Not Supported 00:24:14.041 Doorbell Buffer Config: Not Supported 00:24:14.041 Get LBA Status Capability: Not Supported 00:24:14.041 Command & Feature Lockdown Capability: Not Supported 00:24:14.041 Abort Command Limit: 4 00:24:14.041 Async Event Request Limit: 4 00:24:14.041 Number of Firmware Slots: N/A 00:24:14.041 Firmware Slot 1 Read-Only: N/A 00:24:14.041 Firmware Activation Without Reset: N/A 00:24:14.041 Multiple Update Detection Support: N/A 00:24:14.041 Firmware Update Granularity: No Information Provided 00:24:14.041 Per-Namespace SMART Log: Yes 00:24:14.041 Asymmetric Namespace Access Log Page: Supported 00:24:14.041 ANA Transition Time : 10 sec 00:24:14.041 00:24:14.041 Asymmetric Namespace Access Capabilities 00:24:14.041 ANA Optimized State : Supported 00:24:14.042 ANA Non-Optimized State : Supported 00:24:14.042 ANA Inaccessible State : Supported 00:24:14.042 ANA Persistent Loss State : Supported 00:24:14.042 ANA Change State : Supported 00:24:14.042 ANAGRPID is not changed : No 00:24:14.042 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:14.042 00:24:14.042 ANA Group Identifier Maximum : 128 00:24:14.042 Number of ANA Group Identifiers : 128 00:24:14.042 Max Number of Allowed Namespaces : 1024 00:24:14.042 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:14.042 Command Effects Log Page: Supported 00:24:14.042 Get Log Page Extended Data: Supported 00:24:14.042 Telemetry Log Pages: Not Supported 00:24:14.042 Persistent Event Log Pages: Not Supported 00:24:14.042 Supported Log Pages Log Page: May Support 00:24:14.042 Commands Supported & Effects Log Page: Not Supported 00:24:14.042 Feature Identifiers & Effects Log Page:May Support 00:24:14.042 NVMe-MI Commands & Effects Log Page: May Support 00:24:14.042 Data Area 4 for Telemetry Log: Not Supported 00:24:14.042 Error Log Page Entries Supported: 128 00:24:14.042 Keep Alive: Supported 00:24:14.042 Keep Alive Granularity: 1000 ms 00:24:14.042 00:24:14.042 NVM Command Set Attributes 00:24:14.042 ========================== 
00:24:14.042 Submission Queue Entry Size 00:24:14.042 Max: 64 00:24:14.042 Min: 64 00:24:14.042 Completion Queue Entry Size 00:24:14.042 Max: 16 00:24:14.042 Min: 16 00:24:14.042 Number of Namespaces: 1024 00:24:14.042 Compare Command: Not Supported 00:24:14.042 Write Uncorrectable Command: Not Supported 00:24:14.042 Dataset Management Command: Supported 00:24:14.042 Write Zeroes Command: Supported 00:24:14.042 Set Features Save Field: Not Supported 00:24:14.042 Reservations: Not Supported 00:24:14.042 Timestamp: Not Supported 00:24:14.042 Copy: Not Supported 00:24:14.042 Volatile Write Cache: Present 00:24:14.042 Atomic Write Unit (Normal): 1 00:24:14.042 Atomic Write Unit (PFail): 1 00:24:14.042 Atomic Compare & Write Unit: 1 00:24:14.042 Fused Compare & Write: Not Supported 00:24:14.042 Scatter-Gather List 00:24:14.042 SGL Command Set: Supported 00:24:14.042 SGL Keyed: Supported 00:24:14.042 SGL Bit Bucket Descriptor: Not Supported 00:24:14.042 SGL Metadata Pointer: Not Supported 00:24:14.042 Oversized SGL: Not Supported 00:24:14.042 SGL Metadata Address: Not Supported 00:24:14.042 SGL Offset: Supported 00:24:14.042 Transport SGL Data Block: Not Supported 00:24:14.042 Replay Protected Memory Block: Not Supported 00:24:14.042 00:24:14.042 Firmware Slot Information 00:24:14.042 ========================= 00:24:14.042 Active slot: 0 00:24:14.042 00:24:14.042 Asymmetric Namespace Access 00:24:14.042 =========================== 00:24:14.042 Change Count : 0 00:24:14.042 Number of ANA Group Descriptors : 1 00:24:14.042 ANA Group Descriptor : 0 00:24:14.042 ANA Group ID : 1 00:24:14.042 Number of NSID Values : 1 00:24:14.042 Change Count : 0 00:24:14.042 ANA State : 1 00:24:14.042 Namespace Identifier : 1 00:24:14.042 00:24:14.042 Commands Supported and Effects 00:24:14.042 ============================== 00:24:14.042 Admin Commands 00:24:14.042 -------------- 00:24:14.042 Get Log Page (02h): Supported 00:24:14.042 Identify (06h): Supported 00:24:14.042 Abort (08h): Supported 00:24:14.042 Set Features (09h): Supported 00:24:14.042 Get Features (0Ah): Supported 00:24:14.042 Asynchronous Event Request (0Ch): Supported 00:24:14.042 Keep Alive (18h): Supported 00:24:14.042 I/O Commands 00:24:14.042 ------------ 00:24:14.042 Flush (00h): Supported 00:24:14.042 Write (01h): Supported LBA-Change 00:24:14.042 Read (02h): Supported 00:24:14.042 Write Zeroes (08h): Supported LBA-Change 00:24:14.042 Dataset Management (09h): Supported 00:24:14.042 00:24:14.042 Error Log 00:24:14.042 ========= 00:24:14.042 Entry: 0 00:24:14.042 Error Count: 0x3 00:24:14.042 Submission Queue Id: 0x0 00:24:14.042 Command Id: 0x5 00:24:14.042 Phase Bit: 0 00:24:14.042 Status Code: 0x2 00:24:14.042 Status Code Type: 0x0 00:24:14.042 Do Not Retry: 1 00:24:14.042 Error Location: 0x28 00:24:14.042 LBA: 0x0 00:24:14.042 Namespace: 0x0 00:24:14.042 Vendor Log Page: 0x0 00:24:14.042 ----------- 00:24:14.042 Entry: 1 00:24:14.042 Error Count: 0x2 00:24:14.042 Submission Queue Id: 0x0 00:24:14.042 Command Id: 0x5 00:24:14.042 Phase Bit: 0 00:24:14.042 Status Code: 0x2 00:24:14.042 Status Code Type: 0x0 00:24:14.042 Do Not Retry: 1 00:24:14.042 Error Location: 0x28 00:24:14.042 LBA: 0x0 00:24:14.042 Namespace: 0x0 00:24:14.042 Vendor Log Page: 0x0 00:24:14.042 ----------- 00:24:14.042 Entry: 2 00:24:14.042 Error Count: 0x1 00:24:14.042 Submission Queue Id: 0x0 00:24:14.042 Command Id: 0x0 00:24:14.042 Phase Bit: 0 00:24:14.042 Status Code: 0x2 00:24:14.042 Status Code Type: 0x0 00:24:14.042 Do Not Retry: 1 00:24:14.042 Error Location: 
0x28 00:24:14.042 LBA: 0x0 00:24:14.042 Namespace: 0x0 00:24:14.042 Vendor Log Page: 0x0 00:24:14.042 00:24:14.042 Number of Queues 00:24:14.042 ================ 00:24:14.042 Number of I/O Submission Queues: 128 00:24:14.042 Number of I/O Completion Queues: 128 00:24:14.042 00:24:14.042 ZNS Specific Controller Data 00:24:14.042 ============================ 00:24:14.042 Zone Append Size Limit: 0 00:24:14.042 00:24:14.042 00:24:14.042 Active Namespaces 00:24:14.042 ================= 00:24:14.042 get_feature(0x05) failed 00:24:14.042 Namespace ID:1 00:24:14.042 Command Set Identifier: NVM (00h) 00:24:14.042 Deallocate: Supported 00:24:14.042 Deallocated/Unwritten Error: Not Supported 00:24:14.042 Deallocated Read Value: Unknown 00:24:14.042 Deallocate in Write Zeroes: Not Supported 00:24:14.042 Deallocated Guard Field: 0xFFFF 00:24:14.042 Flush: Supported 00:24:14.042 Reservation: Not Supported 00:24:14.042 Namespace Sharing Capabilities: Multiple Controllers 00:24:14.042 Size (in LBAs): 15628053168 (7452GiB) 00:24:14.042 Capacity (in LBAs): 15628053168 (7452GiB) 00:24:14.042 Utilization (in LBAs): 15628053168 (7452GiB) 00:24:14.042 UUID: a82e988f-636e-49cb-a3f1-2cd2acc678ae 00:24:14.042 Thin Provisioning: Not Supported 00:24:14.042 Per-NS Atomic Units: Yes 00:24:14.042 Atomic Boundary Size (Normal): 0 00:24:14.042 Atomic Boundary Size (PFail): 0 00:24:14.042 Atomic Boundary Offset: 0 00:24:14.042 NGUID/EUI64 Never Reused: No 00:24:14.042 ANA group ID: 1 00:24:14.042 Namespace Write Protected: No 00:24:14.042 Number of LBA Formats: 1 00:24:14.042 Current LBA Format: LBA Format #00 00:24:14.042 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:14.042 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:14.302 rmmod nvme_rdma 00:24:14.302 rmmod nvme_fabrics 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # 
echo 0 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:24:14.302 13:54:40 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:24:17.592 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:17.592 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:22.912 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:24:22.912 00:24:22.912 real 0m20.766s 00:24:22.912 user 0m4.804s 00:24:22.912 sys 0m10.363s 00:24:22.912 13:54:49 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:22.912 13:54:49 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.912 ************************************ 00:24:22.912 END TEST nvmf_identify_kernel_target 00:24:22.912 ************************************ 00:24:22.912 13:54:49 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:22.912 13:54:49 nvmf_rdma -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:24:22.912 13:54:49 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:22.912 13:54:49 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:22.912 13:54:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:22.912 ************************************ 00:24:22.912 START TEST nvmf_auth_host 00:24:22.912 ************************************ 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:24:22.912 * Looking for test storage... 
00:24:22.912 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:22.912 13:54:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:24:29.550 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:24:29.550 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.550 13:54:55 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:24:29.550 Found net devices under 0000:18:00.0: mlx_0_0 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.550 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:24:29.550 Found net devices under 0000:18:00.1: mlx_0_1 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:29.551 13:54:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:29.551 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:29.551 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:24:29.551 altname enp24s0f0np0 00:24:29.551 altname ens785f0np0 00:24:29.551 inet 192.168.100.8/24 scope global mlx_0_0 00:24:29.551 valid_lft forever preferred_lft forever 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:29.551 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:29.551 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:24:29.551 altname enp24s0f1np1 00:24:29.551 altname ens785f1np1 00:24:29.551 inet 192.168.100.9/24 scope global mlx_0_1 00:24:29.551 valid_lft forever preferred_lft forever 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:29.551 
13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:29.551 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:29.811 192.168.100.9' 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:29.811 192.168.100.9' 00:24:29.811 13:54:56 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:29.811 192.168.100.9' 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2584854 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2584854 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2584854 ']' 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
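For reference, the address derivation traced above reduces to a few shell commands. A minimal standalone sketch (not the verbatim nvmf/common.sh helpers; it assumes the RDMA-capable interfaces are named mlx_0_0 and mlx_0_1, as reported for this run):

  # Pick the IPv4 address assigned to an RDMA-capable interface.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  # Collect one address per interface, then split into first/second target IP
  # exactly as the head/tail steps traced above do.
  RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9 in this run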
00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.811 13:54:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1cd1c5de74f432d3f67544e039875d12 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.u3P 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1cd1c5de74f432d3f67544e039875d12 0 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1cd1c5de74f432d3f67544e039875d12 0 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1cd1c5de74f432d3f67544e039875d12 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.u3P 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.u3P 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.u3P 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # 
digest=sha512 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=19323867821737d5c3de2297ad8311ecbcace8117c3e36299d1e36b946ec182c 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Pwn 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 19323867821737d5c3de2297ad8311ecbcace8117c3e36299d1e36b946ec182c 3 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 19323867821737d5c3de2297ad8311ecbcace8117c3e36299d1e36b946ec182c 3 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=19323867821737d5c3de2297ad8311ecbcace8117c3e36299d1e36b946ec182c 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Pwn 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Pwn 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Pwn 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f93044ce7c62a05bd91924f17345ace8c28f95b9900ebaa 00:24:30.748 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:30.749 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EjP 00:24:30.749 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f93044ce7c62a05bd91924f17345ace8c28f95b9900ebaa 0 00:24:30.749 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f93044ce7c62a05bd91924f17345ace8c28f95b9900ebaa 0 00:24:30.749 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:30.749 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:30.749 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f93044ce7c62a05bd91924f17345ace8c28f95b9900ebaa 00:24:30.749 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:30.749 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.EjP 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EjP 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.EjP 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=70ecc8f58c29acb772a100c95bbb69a752806aef0a998073 00:24:31.007 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lEt 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 70ecc8f58c29acb772a100c95bbb69a752806aef0a998073 2 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 70ecc8f58c29acb772a100c95bbb69a752806aef0a998073 2 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=70ecc8f58c29acb772a100c95bbb69a752806aef0a998073 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lEt 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lEt 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.lEt 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6cab3b4fad47f9bec73195f86db71e89 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UAX 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6cab3b4fad47f9bec73195f86db71e89 1 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 6cab3b4fad47f9bec73195f86db71e89 1 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6cab3b4fad47f9bec73195f86db71e89 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UAX 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UAX 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.UAX 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1cfab2498a968c5cd7530363fc9789d6 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6Ol 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1cfab2498a968c5cd7530363fc9789d6 1 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1cfab2498a968c5cd7530363fc9789d6 1 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1cfab2498a968c5cd7530363fc9789d6 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6Ol 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6Ol 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6Ol 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:31.008 13:54:57 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3cabf5b6c1a139f3af251518521a9967c8ed7512b35ab2e6 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fuy 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3cabf5b6c1a139f3af251518521a9967c8ed7512b35ab2e6 2 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3cabf5b6c1a139f3af251518521a9967c8ed7512b35ab2e6 2 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3cabf5b6c1a139f3af251518521a9967c8ed7512b35ab2e6 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:31.008 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fuy 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fuy 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.fuy 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=86000016f8fa45d6317419fa1cc196b4 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gP8 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 86000016f8fa45d6317419fa1cc196b4 0 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 86000016f8fa45d6317419fa1cc196b4 0 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=86000016f8fa45d6317419fa1cc196b4 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gP8 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gP8 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.gP8 
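The gen_dhchap_key calls traced above, and continuing below for keys[4], all follow the same pattern: read len/2 random bytes with xxd to get a len-character hex secret, wrap it into a DHHC-1 string, and store it in a mode-0600 temp file whose path ends up in keys[i] or ckeys[i]. A rough sketch under those assumptions; the DHHC-1 wrapping itself is done by format_dhchap_key, an inline-python helper in nvmf/common.sh that is not reproduced here:

  # Sketch of gen_dhchap_key <digest> <len> as it appears in this trace.
  gen_dhchap_key_sketch() {
      local digest=$1 len=$2                            # e.g. "null" 32 or "sha512" 64
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      local key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters of entropy
      file=$(mktemp -t "spdk.key-${digest}.XXX")
      # format_dhchap_key wraps the hex secret as "DHHC-1:<digest id>:<payload>:"
      # (assumed here to print the wrapped key on stdout; the real helper
      # shells out to python for this step).
      format_dhchap_key "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"                                      # caller stores the path in keys[i]/ckeys[i]
  }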
00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1163f5afa97a579067d79d0cd68919911492b0c231785b7628b67a29472652e6 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bzf 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1163f5afa97a579067d79d0cd68919911492b0c231785b7628b67a29472652e6 3 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1163f5afa97a579067d79d0cd68919911492b0c231785b7628b67a29472652e6 3 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1163f5afa97a579067d79d0cd68919911492b0c231785b7628b67a29472652e6 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bzf 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bzf 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bzf 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2584854 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2584854 ']' 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
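From here the test wires everything together, as the trace below shows: each generated key file is registered with the running nvmf_tgt through the keyring_file_add_key RPC, a kernel nvmet target is built under /sys/kernel/config/nvmet (subsystem nqn.2024-02.io.spdk:cnode0 with namespace 1 backed by /dev/nvme0n1, an RDMA port on 192.168.100.8:4420, and per-host DH-HMAC-CHAP attributes), and the authenticated connect is exercised with bdev_nvme_attach_controller. A condensed sketch of the RPC side, using scripts/rpc.py in place of the test framework's rpc_cmd wrapper (same RPCs, different front end):

  # Register the generated DHHC-1 key files with the target's keyring.
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.u3P
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pwn
  # ...repeated for key1/ckey1 through key4, as traced below.

  # Limit the initiator to the digests/dhgroups under test, then attempt the
  # DH-HMAC-CHAP authenticated connect against the kernel target.
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1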
00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.267 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.526 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.526 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:31.526 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:31.526 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.u3P 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Pwn ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Pwn 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.EjP 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.lEt ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lEt 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.UAX 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6Ol ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Ol 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.fuy 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gP8 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gP8 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bzf 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:31.527 13:54:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:24:34.809 Waiting for block devices as requested 00:24:34.809 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:24:34.809 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:34.809 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:35.068 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:35.068 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:35.068 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:35.327 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:35.327 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:35.327 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:35.585 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:35.585 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:35.585 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:35.843 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:35.843 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:35.843 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:36.101 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:36.101 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:37.036 No valid GPT data, bailing 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:37.036 
13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:37.036 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 --hostid=809f3706-e051-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:24:37.036 00:24:37.036 Discovery Log Number of Records 2, Generation counter 2 00:24:37.036 =====Discovery Log Entry 0====== 00:24:37.036 trtype: rdma 00:24:37.036 adrfam: ipv4 00:24:37.036 subtype: current discovery subsystem 00:24:37.036 treq: not specified, sq flow control disable supported 00:24:37.036 portid: 1 00:24:37.036 trsvcid: 4420 00:24:37.036 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:37.036 traddr: 192.168.100.8 00:24:37.036 eflags: none 00:24:37.036 rdma_prtype: not specified 00:24:37.037 rdma_qptype: connected 00:24:37.037 rdma_cms: rdma-cm 00:24:37.037 rdma_pkey: 0x0000 00:24:37.037 =====Discovery Log Entry 1====== 00:24:37.037 trtype: rdma 00:24:37.037 adrfam: ipv4 00:24:37.037 subtype: nvme subsystem 00:24:37.037 treq: not specified, sq flow control disable supported 00:24:37.037 portid: 1 00:24:37.037 trsvcid: 4420 00:24:37.037 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:37.037 traddr: 192.168.100.8 00:24:37.037 eflags: none 00:24:37.037 rdma_prtype: not specified 00:24:37.037 rdma_qptype: connected 00:24:37.037 rdma_cms: rdma-cm 00:24:37.037 rdma_pkey: 0x0000 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.037 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.297 nvme0n1 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.297 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.556 nvme0n1 00:24:37.556 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.556 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.556 13:55:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.556 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.556 13:55:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.556 13:55:04 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:37.556 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.815 nvme0n1 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.815 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.073 13:55:04 
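
The get_main_ns_ip helper traced above (nvmf/common.sh@741-755) is what resolves the 192.168.100.8 address passed to every bdev_nvme_attach_controller call: it keeps a small map from transport to the environment variable holding the right address and, because this run uses rdma, ends up echoing NVMF_FIRST_TARGET_IP. Condensed into a standalone sketch (the TEST_TRANSPORT variable name is an assumption; the trace only shows the literal value rdma):

# Condensed from the nvmf/common.sh@741-755 trace; returns the IP used for attach calls.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # rdma runs use the first target-side IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # tcp runs use the initiator-side IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}                                    # indirect expansion; 192.168.100.8 here
    [[ -z $ip ]] && return 1
    echo "$ip"
}
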
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.073 nvme0n1 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.073 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:38.331 13:55:04 
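
The DHHC-1 strings used throughout this trace are NVMe DH-HMAC-CHAP secrets in the textual form also produced by nvme-cli's gen-dhchap-key: DHHC-1:<t>:<base64>:. As background rather than something the trace itself states, the <t> field is generally 00 for an unhashed secret and 01/02/03 for a SHA-256/384/512-transformed one, and the base64 payload is assumed to carry the secret bytes followed by a 4-byte CRC-32, so its decoded length reveals the secret size. A quick check with one of the keys from this run:

# Sketch: inspect the secret length inside a DHHC-1 string from this trace.
# The 4-byte CRC-32 trailer is an assumption about the format, not shown in the log.
key='DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9:'
payload=$(echo "$key" | cut -d: -f3)             # keep only the base64 field
total=$(echo -n "$payload" | base64 -d | wc -c)  # 36 bytes decoded
echo "secret bytes: $((total - 4))"              # 32-byte secret for this key
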
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.331 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.332 nvme0n1 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.332 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.590 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.591 13:55:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.591 nvme0n1 00:24:38.591 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.591 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.849 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.108 nvme0n1 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
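
Every iteration traced in this section follows the same shape: the outer loops at host/auth.sh@100-102 walk the (digest, dhgroup, keyid) combinations, nvmet_auth_set_key programs the target side, and connect_authenticate then drives the SPDK host through bdev_nvme_set_options, bdev_nvme_attach_controller, a bdev_nvme_get_controllers check for nvme0, and bdev_nvme_detach_controller. Condensed into the underlying RPC calls for the sha256/ffdhe3072/keyid=0 pass that just completed above (rpc_cmd is assumed to wrap scripts/rpc.py against the running SPDK app, and key0/ckey0 are key names registered earlier in the test, outside this excerpt):

# Host-side sequence for one authentication pass, using the same RPCs and
# arguments visible in the trace above.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # controller attached
rpc_cmd bdev_nvme_detach_controller nvme0                               # tear down before the next combination

If a handshake failed, the attach call would fail instead of producing the nvme0n1 namespace lines seen after each successful attach in this trace.
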
00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:39.108 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.109 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.368 nvme0n1 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.368 13:55:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.627 nvme0n1 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.627 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.628 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:39.628 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:39.628 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:39.628 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:39.628 
13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:39.628 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.628 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.628 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.887 nvme0n1 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.887 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.145 nvme0n1 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.145 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.404 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.404 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.404 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:40.405 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.664 nvme0n1 00:24:40.664 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.664 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.664 13:55:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.664 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.664 13:55:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.664 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 nvme0n1 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.923 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.181 nvme0n1 00:24:41.181 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.440 13:55:07 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.441 13:55:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.700 nvme0n1 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.700 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.959 nvme0n1 00:24:41.959 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.959 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.959 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.959 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.959 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.959 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.959 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.959 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.959 
13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.959 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.959 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.218 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.476 nvme0n1 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.476 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.477 13:55:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.044 nvme0n1 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
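
[editor's note] Each iteration traced above repeats the same four RPC steps against the target at 192.168.100.8:4420, varying only the digest, DH group and key index. The following is a minimal stand-alone sketch of one such iteration, not the test script itself; it assumes an SPDK checkout with scripts/rpc.py on hand, a target already serving nqn.2024-02.io.spdk:cnode0 over RDMA, and DH-HMAC-CHAP keys already registered under the names key2/ckey2 (that set-up happens earlier in the script and is not shown here), so treat the paths and key names as placeholders.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration, mirroring the rpc_cmd calls in the trace.
set -e

rpc=./scripts/rpc.py          # assumed path to SPDK's RPC client
ip=192.168.100.8              # NVMF_FIRST_TARGET_IP as resolved in the trace
digest=sha256
dhgroup=ffdhe6144
keyid=2

# 1. Restrict the host to the digest/DH group combination under test.
$rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Attach with DH-HMAC-CHAP using the pre-registered key pair for this key index.
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Verify the controller came up, as the trace does with jq.
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# 4. Tear down before the next digest/dhgroup/keyid combination.
$rpc bdev_nvme_detach_controller nvme0
[end editor's note]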
00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:43.044 
13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.044 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.610 nvme0n1 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.610 13:55:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.868 nvme0n1 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.868 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:44.127 13:55:10 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.127 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.128 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:24:44.387 nvme0n1 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:44.387 13:55:10 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.387 13:55:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.954 nvme0n1 00:24:44.954 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.954 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.954 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.954 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.954 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.954 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.213 13:55:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.781 nvme0n1 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.781 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.347 nvme0n1 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.347 13:55:12 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.347 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:46.604 13:55:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:46.605 13:55:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:46.605 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.605 13:55:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.171 nvme0n1 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.171 13:55:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.793 nvme0n1 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.793 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:47.794 
13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.794 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.051 nvme0n1 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.051 
13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:48.051 
13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.051 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.308 nvme0n1 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:48.308 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.309 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.566 nvme0n1 00:24:48.566 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.566 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.566 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.566 13:55:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.566 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.566 13:55:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.566 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.823 nvme0n1 00:24:48.823 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.823 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.823 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.823 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.823 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.823 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.824 13:55:15 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.824 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.082 nvme0n1 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.082 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.340 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.340 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.340 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.340 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:49.340 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.340 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.340 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.341 13:55:15 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.341 nvme0n1 
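[editor's note] For readers following the trace, each block above and below is one DH-HMAC-CHAP round trip for a given digest/dhgroup/keyid combination: the target side is handed the hash, FFDHE group and secret(s) for that key id (the echo lines inside nvmet_auth_set_key), then the host restricts bdev_nvme to the same digest/dhgroup, attaches a controller over RDMA with the matching key (plus the controller key when a bidirectional secret exists), checks the controller actually appeared, and detaches it before the next combination. A minimal stand-alone sketch of the host-side sequence, reconstructed from the RPCs and addressing visible in this log, is shown below; rpc_cmd is the autotest wrapper around scripts/rpc.py, and the key names key0/ckey0 are assumed to have been registered earlier in the run, as the trace's key names suggest.

#!/usr/bin/env bash
# Sketch only: condenses the per-iteration host-side flow visible in this trace.
# Assumes rpc_cmd (the autotest wrapper for scripts/rpc.py) is sourced and that
# the DH-HMAC-CHAP secrets were loaded earlier under the names key0/ckey0.
set -e

digest=sha384
dhgroup=ffdhe3072
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
target_ip=192.168.100.8

# Restrict the host to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Authenticate and connect; --dhchap-ctrlr-key is passed only when a
# bidirectional (controller) secret exists for this key id.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a "$target_ip" -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The iteration only passes if the controller actually shows up...
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# ...and is torn down before the next digest/dhgroup/keyid combination.
rpc_cmd bdev_nvme_detach_controller nvme0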
00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.341 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.599 13:55:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.858 nvme0n1 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.858 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.117 nvme0n1 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.117 13:55:16 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.117 
13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.117 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.375 nvme0n1 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:50.375 13:55:16 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.375 13:55:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.633 nvme0n1 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:50.633 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:50.634 13:55:17 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.634 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.200 nvme0n1 00:24:51.200 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.201 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.460 nvme0n1 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.460 13:55:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.719 nvme0n1 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.719 
13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.719 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.977 nvme0n1 00:24:51.977 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.977 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.977 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.977 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.977 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.977 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.236 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.236 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.236 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.236 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.236 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.236 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.237 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.496 nvme0n1 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
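The xtrace above walks host/auth.sh through every (dhgroup, keyid) combination for the sha384 digest: the target-side key is installed with the nvmet_auth_set_key helper, the host is then restricted to the digest/DH group under test, attached over RDMA with the matching DH-HMAC-CHAP key, checked, and detached again. A rough per-iteration sketch of that flow, reconstructed only from the commands visible in the trace (rpc_cmd, nvmet_auth_set_key, the keys/ckeys arrays and the 192.168.100.8 RDMA target are the test suite's own helpers and fixtures, not redefined here; this is not the literal host/auth.sh source):

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # Target side: install key $keyid for hmac(sha384) / $dhgroup
    # (host/auth.sh@103 in the trace above).
    nvmet_auth_set_key sha384 "$dhgroup" "$keyid"

    # Host side: only offer the digest and DH group under test ...
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"

    # ... then attach over RDMA with the matching host key, adding the
    # bidirectional controller key only when one is defined for this keyid.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Verify the authenticated controller came up, then tear it down
    # before moving on to the next (dhgroup, keyid) combination.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done
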
00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.496 13:55:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.062 nvme0n1 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:53.062 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.063 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.321 nvme0n1 00:24:53.321 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.321 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.321 13:55:19 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.321 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.321 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.321 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.321 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.321 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.321 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.321 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.579 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.580 13:55:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.851 nvme0n1 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.851 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 nvme0n1 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.418 13:55:20 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.418 13:55:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.983 nvme0n1 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:54.983 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.984 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.551 nvme0n1 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:55.551 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:55.552 13:55:21 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.552 13:55:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.118 nvme0n1 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:56.118 
13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.118 13:55:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.686 nvme0n1 00:24:56.686 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.686 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.686 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.686 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.686 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.686 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:56.945 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.946 
13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.946 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.511 nvme0n1 00:24:57.511 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.511 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.512 13:55:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.079 nvme0n1 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 
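The trace above keeps repeating one fixed cycle per digest/dhgroup/keyid combination: restrict the initiator's DH-HMAC-CHAP parameters, attach to the target with the key pair under test, confirm the controller came up, and detach again. Condensed into a standalone sketch, the host side of a single iteration looks roughly like the following; it assumes rpc_cmd resolves to SPDK's scripts/rpc.py (SPDK_ROOT is a placeholder) and that the key names key2/ckey2 were registered with the keyring earlier in the run, since neither detail is shown in this part of the log.

  rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }   # assumption: harness wrapper, SPDK_ROOT is a placeholder

  digest=sha384 dhgroup=ffdhe8192 keyid=2
  # Allow only the digest/DH-group pair under test, mirroring bdev_nvme_set_options above.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Attach over RDMA with the host key for this keyid (and the controller key, when one exists).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  # Authentication succeeded if the controller shows up; it is then torn down for the next iteration.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # the log checks this prints nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0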
00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.079 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.337 nvme0n1 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.337 13:55:24 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.337 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.596 13:55:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.596 nvme0n1 00:24:58.596 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.596 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.596 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.596 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.596 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.596 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
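The get_main_ns_ip fragments that appear before every attach (the ip_candidates map, the -z tests, the final echo 192.168.100.8) are the helper resolving which environment variable holds the connect address for the transport in use. A plausible reconstruction follows; the transport variable is written here as TEST_TRANSPORT and the fail-on-empty branches are inferred, since this trace only ever exercises the rdma path with NVMF_FIRST_TARGET_IP populated.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs connect to the first target IP (192.168.100.8 here)
          ["tcp"]=NVMF_INITIATOR_IP
      )
      # Fail if the transport has no candidate variable (assumed; not exercised in this log).
      [[ -n ${ip_candidates[$TEST_TRANSPORT]:-} ]] || return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -n ${!ip} ]] || return 1   # indirect expansion; ${!ip} is the actual address
      echo "${!ip}"
  }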
00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:58.855 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:58.856 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.856 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.856 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.856 nvme0n1 00:24:58.856 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.856 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.856 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.856 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.856 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:24:58.856 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.114 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.115 nvme0n1 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.115 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:59.373 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:59.374 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.374 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.632 nvme0n1 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.632 13:55:25 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.632 13:55:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.890 nvme0n1 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.890 13:55:26 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.890 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.891 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.149 nvme0n1 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.149 13:55:26 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.149 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.408 nvme0n1 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.408 13:55:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.666 nvme0n1 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.666 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.667 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.667 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:00.667 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.930 nvme0n1 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.930 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.189 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.447 nvme0n1 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
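The nvmet_auth_set_key steps traced here provision the target side of DH-HMAC-CHAP: for the selected digest, DH group and key index, the script installs the host key and, when one exists, the bidirectional controller key for the host NQN. The xtrace output only shows the echo commands, not their redirection targets, so the configfs paths in the sketch below are assumptions about the kernel nvmet layout rather than values taken from this log; the digest, DH group and DHHC-1 key strings are the ones visible in the trace.

# Sketch of the target-side key setup (assumed kernel nvmet configfs layout,
# /sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_*; the host NQN is the one
# used by bdev_nvme_attach_controller in the trace).
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # e.g. hmac(sha512)
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"   # e.g. ffdhe4096
    echo "${key}"          > "${host_dir}/dhchap_key"       # DHHC-1:xx:... host key
    # Only set a controller key when the test case defines one (bidirectional auth).
    [[ -n ${ckey} ]] && echo "${ckey}" > "${host_dir}/dhchap_ctrlr_key"
}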
00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.447 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.448 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:01.448 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:01.448 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:01.448 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:01.448 13:55:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:01.448 13:55:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.448 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.448 13:55:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.705 nvme0n1 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.705 
13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.705 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.271 nvme0n1 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 
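Every iteration in this part of the trace follows the same host-side pattern: after the target key is installed, bdev_nvme is restricted to a single digest/DH-group pair, a controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key is defined), the new nvme0 controller is verified with bdev_nvme_get_controllers, and the controller is detached before the next key index. A condensed sketch of that loop follows; rpc_cmd, get_main_ns_ip and the keys[]/ckeys[]/dhgroups[] arrays are taken as given from the surrounding host/auth.sh script, and sha512 is the digest exercised in this part of the trace.

# Condensed sketch of the connect/verify/detach loop seen in the trace.
for dhgroup in "${dhgroups[@]}"; do          # ffdhe3072..ffdhe8192 in this excerpt
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        # Pass a controller key only if the test case defines one for this keyid.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach counts as successful once the controller shows up by name.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done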
00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.271 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.530 nvme0n1 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:02.530 13:55:28 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.530 13:55:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.790 nvme0n1 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.790 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.357 nvme0n1 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.357 13:55:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.924 nvme0n1 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.924 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.925 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.184 nvme0n1 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:04.184 13:55:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:04.443 13:55:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:04.443 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.443 13:55:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.703 nvme0n1 00:25:04.703 13:55:31 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.703 13:55:31 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.703 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.272 nvme0n1 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWNkMWM1ZGU3NGY0MzJkM2Y2NzU0NGUwMzk4NzVkMTJ2sDA9: 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: ]] 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTkzMjM4Njc4MjE3MzdkNWMzZGUyMjk3YWQ4MzExZWNiY2FjZTgxMTdjM2UzNjI5OWQxZTM2Yjk0NmVjMTgyY1iSD6s=: 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.272 13:55:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.841 nvme0n1 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.841 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.842 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.463 nvme0n1 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNhYjNiNGZhZDQ3ZjliZWM3MzE5NWY4NmRiNzFlODkLhJcI: 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: ]] 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWNmYWIyNDk4YTk2OGM1Y2Q3NTMwMzYzZmM5Nzg5ZDa1eOuo: 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.463 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.722 13:55:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.288 nvme0n1 00:25:07.288 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.288 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.288 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.288 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.288 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.288 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhYmY1YjZjMWExMzlmM2FmMjUxNTE4NTIxYTk5NjdjOGVkNzUxMmIzNWFiMmU2apPNxw==: 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: ]] 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYwMDAwMTZmOGZhNDVkNjMxNzQxOWZhMWNjMTk2YjSLWQXi: 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:07.289 13:55:33 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.289 13:55:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.856 nvme0n1 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTE2M2Y1YWZhOTdhNTc5MDY3ZDc5ZDBjZDY4OTE5OTExNDkyYjBjMjMxNzg1Yjc2MjhiNjdhMjk0NzI2NTJlNpq7uaU=: 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.856 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.857 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.857 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:07.857 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:07.857 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:07.857 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:07.857 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:07.857 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.857 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.857 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.424 nvme0n1 00:25:08.424 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.424 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.424 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.424 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.424 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.424 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.424 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.424 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.424 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.424 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Y5MzA0NGNlN2M2MmEwNWJkOTE5MjRmMTczNDVhY2U4YzI4Zjk1Yjk5MDBlYmFhzFEoCQ==: 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: ]] 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzBlY2M4ZjU4YzI5YWNiNzcyYTEwMGM5NWJiYjY5YTc1MjgwNmFlZjBhOTk4MDcztkNpEg==: 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
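The trace above has walked keyids 1 through 4 with the sha512/ffdhe8192 combination (keyid 4 carries no controller key, so only --dhchap-key is passed) and has just re-keyed the target with sha256/ffdhe2048 before the failure cases. For every keyid the host side reduces to the same short RPC sequence; a minimal sketch of it follows, using only commands visible in the trace (rpc_cmd is autotest_common.sh's wrapper around scripts/rpc.py, and key1/ckey1 are key names the test registered earlier, not literal secrets):

  # allow exactly one digest/DH-group pair on the initiator
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # attach to the RDMA listener with the host key and the bidirectional controller key
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # confirm the controller (and its nvme0n1 namespace) showed up, then detach before the next keyid
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
  rpc_cmd bdev_nvme_detach_controller nvme0
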
00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.683 13:55:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.683 request: 00:25:08.683 { 00:25:08.683 "name": "nvme0", 00:25:08.683 "trtype": "rdma", 00:25:08.683 "traddr": "192.168.100.8", 00:25:08.683 "adrfam": "ipv4", 00:25:08.683 "trsvcid": "4420", 00:25:08.683 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:08.683 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:08.683 "prchk_reftag": false, 00:25:08.684 "prchk_guard": false, 00:25:08.684 "hdgst": false, 00:25:08.684 "ddgst": false, 00:25:08.684 "method": "bdev_nvme_attach_controller", 00:25:08.684 "req_id": 1 00:25:08.684 } 00:25:08.684 Got JSON-RPC error response 00:25:08.684 response: 00:25:08.684 { 00:25:08.684 "code": -5, 00:25:08.684 "message": "Input/output error" 00:25:08.684 } 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.684 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.943 request: 00:25:08.943 { 00:25:08.943 "name": "nvme0", 00:25:08.943 "trtype": "rdma", 00:25:08.943 "traddr": "192.168.100.8", 00:25:08.943 "adrfam": "ipv4", 00:25:08.943 "trsvcid": "4420", 00:25:08.943 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:08.943 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:08.943 "prchk_reftag": false, 00:25:08.943 "prchk_guard": false, 00:25:08.943 "hdgst": false, 00:25:08.943 "ddgst": false, 00:25:08.943 "dhchap_key": "key2", 00:25:08.943 "method": "bdev_nvme_attach_controller", 00:25:08.943 "req_id": 1 00:25:08.943 } 00:25:08.943 Got JSON-RPC error response 00:25:08.943 response: 00:25:08.943 { 00:25:08.943 "code": -5, 00:25:08.943 "message": "Input/output error" 00:25:08.943 } 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.943 13:55:35 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.943 request: 00:25:08.943 { 00:25:08.943 "name": "nvme0", 00:25:08.943 "trtype": "rdma", 00:25:08.943 "traddr": "192.168.100.8", 00:25:08.943 "adrfam": "ipv4", 00:25:08.943 "trsvcid": "4420", 00:25:08.943 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:08.943 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:08.943 "prchk_reftag": false, 00:25:08.943 "prchk_guard": false, 00:25:08.943 "hdgst": false, 00:25:08.943 "ddgst": false, 00:25:08.943 "dhchap_key": "key1", 00:25:08.943 "dhchap_ctrlr_key": "ckey2", 00:25:08.943 "method": "bdev_nvme_attach_controller", 00:25:08.943 "req_id": 1 00:25:08.943 } 00:25:08.943 Got JSON-RPC error response 00:25:08.943 response: 00:25:08.943 { 00:25:08.943 "code": -5, 00:25:08.943 "message": "Input/output error" 00:25:08.943 } 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 
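The three rejected attaches above are the authentication failure cases: no DH-HMAC-CHAP key at all, key2 against a subsystem keyed for keyid 1, and key1 paired with the wrong controller key ckey2. Each attempt is run through NOT, the autotest_common.sh helper that inverts the exit status, and each surfaces as JSON-RPC error -5 ("Input/output error") from bdev_nvme_attach_controller, after which the test checks that no controller was left behind. A minimal sketch of that pattern, reusing the helpers from the trace:

  # must fail: key1 is valid for keyid 1, but ckey2 is not its controller key
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2
  # and the failed attempt must not leave a controller registered
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))
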
00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.943 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:08.943 rmmod nvme_rdma 00:25:08.943 rmmod nvme_fabrics 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2584854 ']' 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2584854 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2584854 ']' 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2584854 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2584854 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2584854' 00:25:09.203 killing process with pid 2584854 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2584854 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2584854 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:09.203 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:09.462 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:09.462 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:09.462 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:25:09.462 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:09.462 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:09.462 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:25:09.462 13:55:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:25:12.753 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:12.753 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:18.022 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:25:18.022 13:55:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.u3P /tmp/spdk.key-null.EjP /tmp/spdk.key-sha256.UAX /tmp/spdk.key-sha384.fuy /tmp/spdk.key-sha512.bzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:25:18.022 13:55:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:25:21.312 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:21.312 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:21.312 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:21.313 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:21.313 00:25:21.313 real 0m58.381s 00:25:21.313 user 0m44.072s 00:25:21.313 sys 0m15.754s 00:25:21.313 13:55:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:21.313 13:55:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.313 ************************************ 00:25:21.313 END TEST nvmf_auth_host 
00:25:21.313 ************************************ 00:25:21.313 13:55:47 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:25:21.313 13:55:47 nvmf_rdma -- nvmf/nvmf.sh@107 -- # [[ rdma == \t\c\p ]] 00:25:21.313 13:55:47 nvmf_rdma -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:21.313 13:55:47 nvmf_rdma -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:25:21.313 13:55:47 nvmf_rdma -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:25:21.313 13:55:47 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:25:21.313 13:55:47 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:21.313 13:55:47 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.313 13:55:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:21.313 ************************************ 00:25:21.313 START TEST nvmf_bdevperf 00:25:21.313 ************************************ 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:25:21.313 * Looking for test storage... 00:25:21.313 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
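With nvmf_auth_host finished and its temporary /tmp/spdk.key-* files removed, the suite moves on to nvmf_bdevperf over the same RDMA transport. The first thing bdevperf.sh does is source test/nvmf/common.sh, which fixes the listener port (4420), the 192.168.100.x address range and a freshly generated host NQN/ID for the rest of the run. Outside of Jenkins, the invocation recorded in the trace corresponds roughly to the following (the workspace path is the CI node's; substitute your own SPDK checkout):

  # run the bdevperf host test standalone against the physical mlx5 NICs
  cd /path/to/spdk        # hypothetical checkout location
  ./test/nvmf/host/bdevperf.sh --transport=rdma
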
00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:21.313 13:55:47 
nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:21.313 13:55:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.886 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:25:27.887 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:25:27.887 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.887 13:55:54 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:25:27.887 Found net devices under 0000:18:00.0: mlx_0_0 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:25:27.887 Found net devices under 0000:18:00.1: mlx_0_1 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:27.887 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:28.146 13:55:54 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:28.146 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:28.147 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:28.147 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:25:28.147 altname enp24s0f0np0 00:25:28.147 altname ens785f0np0 00:25:28.147 inet 192.168.100.8/24 scope global mlx_0_0 00:25:28.147 valid_lft forever preferred_lft forever 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:28.147 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:28.147 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:25:28.147 altname enp24s0f1np1 00:25:28.147 altname ens785f1np1 00:25:28.147 inet 192.168.100.9/24 scope global mlx_0_1 00:25:28.147 valid_lft forever preferred_lft forever 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:28.147 192.168.100.9' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:28.147 192.168.100.9' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:25:28.147 13:55:54 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:28.147 192.168.100.9' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2596820 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2596820 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2596820 ']' 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:28.147 13:55:54 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:28.405 [2024-07-15 13:55:54.696499] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:28.405 [2024-07-15 13:55:54.696572] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.405 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.405 [2024-07-15 13:55:54.786022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:28.405 [2024-07-15 13:55:54.875545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.405 [2024-07-15 13:55:54.875594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
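The nvmfappstart step traced above amounts to launching the nvmf_tgt binary with the core mask and tracepoint flags shown on the waitforlisten line. A minimal standalone sketch follows, with the binary path and flags copied from the trace; the backgrounding and the socket poll are illustrative stand-ins, not the harness's own waitforlisten logic:

# -m 0xE pins the target to cores 1-3, matching the three reactors reported below
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# the harness waits until the app answers on /var/tmp/spdk.sock; a crude stand-in
# is simply waiting for the UNIX domain socket to appear
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done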
00:25:28.405 [2024-07-15 13:55:54.875603] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.405 [2024-07-15 13:55:54.875611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.405 [2024-07-15 13:55:54.875618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.405 [2024-07-15 13:55:54.875692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.405 [2024-07-15 13:55:54.875797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.405 [2024-07-15 13:55:54.875797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:29.338 [2024-07-15 13:55:55.592241] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dd4a80/0x1dd8f70) succeed. 00:25:29.338 [2024-07-15 13:55:55.601678] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dd6020/0x1e1a600) succeed. 
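The RDMA transport is created through the harness's rpc_cmd helper, which hands its arguments to SPDK's JSON-RPC interface; the two create_ib_device notices above are the target binding mlx5_0 and mlx5_1 in response. A standalone equivalent, shown only as a sketch and assuming the default /var/tmp/spdk.sock and the SPDK checkout used by this job, would be:

# same arguments as the rpc_cmd trace above: RDMA transport, 8 KiB IO unit size, 1024 shared buffers
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_create_transport -t rdma -u 8192 --num-shared-buffers 1024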
00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:29.338 Malloc0 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:29.338 [2024-07-15 13:55:55.758361] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:29.338 { 00:25:29.338 "params": { 00:25:29.338 "name": "Nvme$subsystem", 00:25:29.338 "trtype": "$TEST_TRANSPORT", 00:25:29.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.338 "adrfam": "ipv4", 00:25:29.338 "trsvcid": "$NVMF_PORT", 00:25:29.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.338 "hdgst": ${hdgst:-false}, 00:25:29.338 "ddgst": ${ddgst:-false} 00:25:29.338 }, 00:25:29.338 "method": "bdev_nvme_attach_controller" 00:25:29.338 } 00:25:29.338 EOF 00:25:29.338 )") 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
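The four rpc_cmd calls above provision the test subsystem end to end: a 64 MiB malloc bdev with 512-byte blocks, the cnode1 subsystem, its namespace, and the RDMA listener on 192.168.100.8:4420. Reissued directly against the RPC socket, the same sequence looks like the sketch below (arguments copied verbatim from the trace; the $rpc shorthand is illustrative):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420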
00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:29.338 13:55:55 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:29.338 "params": { 00:25:29.338 "name": "Nvme1", 00:25:29.338 "trtype": "rdma", 00:25:29.338 "traddr": "192.168.100.8", 00:25:29.338 "adrfam": "ipv4", 00:25:29.338 "trsvcid": "4420", 00:25:29.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:29.338 "hdgst": false, 00:25:29.338 "ddgst": false 00:25:29.338 }, 00:25:29.338 "method": "bdev_nvme_attach_controller" 00:25:29.338 }' 00:25:29.338 [2024-07-15 13:55:55.810734] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:29.338 [2024-07-15 13:55:55.810795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596964 ] 00:25:29.338 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.596 [2024-07-15 13:55:55.896329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.596 [2024-07-15 13:55:55.979350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.852 Running I/O for 1 seconds... 00:25:30.784 00:25:30.784 Latency(us) 00:25:30.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.784 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:30.784 Verification LBA range: start 0x0 length 0x4000 00:25:30.784 Nvme1n1 : 1.00 17875.35 69.83 0.00 0.00 7115.76 1040.03 11853.47 00:25:30.784 =================================================================================================================== 00:25:30.784 Total : 17875.35 69.83 0.00 0.00 7115.76 1040.03 11853.47 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2597209 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:31.041 { 00:25:31.041 "params": { 00:25:31.041 "name": "Nvme$subsystem", 00:25:31.041 "trtype": "$TEST_TRANSPORT", 00:25:31.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.041 "adrfam": "ipv4", 00:25:31.041 "trsvcid": "$NVMF_PORT", 00:25:31.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.041 "hdgst": ${hdgst:-false}, 00:25:31.041 "ddgst": ${ddgst:-false} 00:25:31.041 }, 00:25:31.041 "method": "bdev_nvme_attach_controller" 00:25:31.041 } 00:25:31.041 EOF 00:25:31.041 )") 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
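On the initiator side, gen_nvmf_target_json renders the bdev_nvme_attach_controller entry printed above and bdevperf reads it through a /dev/fd pipe; the first one-second verify pass finishes at roughly 17.9K IOPS before the longer 15-second run with -f is launched. To reproduce the run outside the harness, the same entry can be written to an ordinary file. The sketch below assumes the standard SPDK subsystems/config envelope that the helper wraps around the entry; the envelope and the bdevperf.json file name do not appear in the trace and are illustrative:

cat > bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# flags copied from the 15-second invocation traced above
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    --json bdevperf.json -q 128 -o 4096 -w verify -t 15 -f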
00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:31.041 13:55:57 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:31.041 "params": { 00:25:31.041 "name": "Nvme1", 00:25:31.041 "trtype": "rdma", 00:25:31.041 "traddr": "192.168.100.8", 00:25:31.041 "adrfam": "ipv4", 00:25:31.041 "trsvcid": "4420", 00:25:31.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:31.041 "hdgst": false, 00:25:31.041 "ddgst": false 00:25:31.041 }, 00:25:31.041 "method": "bdev_nvme_attach_controller" 00:25:31.041 }' 00:25:31.041 [2024-07-15 13:55:57.451433] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:31.041 [2024-07-15 13:55:57.451495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2597209 ] 00:25:31.041 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.041 [2024-07-15 13:55:57.538644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.298 [2024-07-15 13:55:57.624216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.298 Running I/O for 15 seconds... 00:25:34.578 13:56:00 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2596820 00:25:34.578 13:56:00 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:35.145 [2024-07-15 13:56:01.442473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.145 [2024-07-15 13:56:01.442517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.145 [2024-07-15 13:56:01.442538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 
13:56:01.442847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.442989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.442999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.443014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.443034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:35.146 [2024-07-15 13:56:01.443054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.443074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.443094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.443114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.146 [2024-07-15 13:56:01.443134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 
m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.146 [2024-07-15 13:56:01.443373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x182b00 00:25:35.146 [2024-07-15 13:56:01.443382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443433] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:111800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111872 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000075be000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.443981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.443992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x182b00 00:25:35.147 
[2024-07-15 13:56:01.444003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.444014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.444023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.444034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.444044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.444054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.444064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.444074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.444084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.444094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.444104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.444115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.444124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.444135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.444149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.444160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x182b00 00:25:35.147 [2024-07-15 13:56:01.444169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.147 [2024-07-15 13:56:01.444180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 
dnr:0 00:25:35.148 [2024-07-15 13:56:01.444577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:112312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.148 [2024-07-15 13:56:01.444962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x182b00 00:25:35.148 [2024-07-15 13:56:01.444972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.444983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x182b00 00:25:35.149 [2024-07-15 13:56:01.444993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.445004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x182b00 00:25:35.149 [2024-07-15 13:56:01.445013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.445024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x182b00 00:25:35.149 [2024-07-15 13:56:01.445034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.445045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182b00 00:25:35.149 [2024-07-15 13:56:01.445054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.455294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182b00 00:25:35.149 [2024-07-15 13:56:01.455310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.455322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x182b00 00:25:35.149 [2024-07-15 13:56:01.455331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.455342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x182b00 00:25:35.149 [2024-07-15 13:56:01.455351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.455362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112384 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000753e000 len:0x1000 key:0x182b00 00:25:35.149 [2024-07-15 13:56:01.455372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:400e2000 sqhd:52b0 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.457296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:35.149 [2024-07-15 13:56:01.457310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:35.149 [2024-07-15 13:56:01.457320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112392 len:8 PRP1 0x0 PRP2 0x0 00:25:35.149 [2024-07-15 13:56:01.457330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.457377] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:25:35.149 [2024-07-15 13:56:01.457410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.149 [2024-07-15 13:56:01.457420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.457431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.149 [2024-07-15 13:56:01.457440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.457450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.149 [2024-07-15 13:56:01.457459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.457468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.149 [2024-07-15 13:56:01.457477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.149 [2024-07-15 13:56:01.474770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:35.149 [2024-07-15 13:56:01.474825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:35.149 [2024-07-15 13:56:01.474857] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
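The run of notices above is the expected signature of a forced disconnect during the bdevperf failover test: every READ still queued on I/O qpair 1 is completed manually with ABORTED - SQ DELETION once the qpair is freed, the admin queue's ASYNC EVENT REQUESTs are aborted the same way, and the host then sees CQ transport error -6 and starts a controller reset. When working through a saved copy of this console output, a short filter makes the abort burst easier to quantify (a sketch only; build.log stands in for wherever the log was saved and is not a file produced by this job):

  # count the aborted READs and report the LBA window they covered
  grep -c 'ABORTED - SQ DELETION' build.log
  grep -o 'lba:[0-9]*' build.log | cut -d: -f2 | sort -n | \
    awk 'NR==1 {min=$1} {max=$1} END {print "lba " min " .. " max}'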
00:25:35.149 [2024-07-15 13:56:01.477874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.149 [2024-07-15 13:56:01.481171] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:35.149 [2024-07-15 13:56:01.481193] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:35.149 [2024-07-15 13:56:01.481205] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:25:36.083 [2024-07-15 13:56:02.485234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:36.083 [2024-07-15 13:56:02.485293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:36.083 [2024-07-15 13:56:02.485609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:36.083 [2024-07-15 13:56:02.485620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:36.083 [2024-07-15 13:56:02.485631] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:36.083 [2024-07-15 13:56:02.487872] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:36.083 [2024-07-15 13:56:02.488342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:36.083 [2024-07-15 13:56:02.500644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:36.083 [2024-07-15 13:56:02.503067] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:36.083 [2024-07-15 13:56:02.503087] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:36.083 [2024-07-15 13:56:02.503095] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:25:37.059 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2596820 Killed "${NVMF_APP[@]}" "$@" 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2597950 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2597950 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2597950 ']' 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 
00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:37.059 13:56:03 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:37.059 [2024-07-15 13:56:03.471010] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:37.059 [2024-07-15 13:56:03.471066] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.059 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.059 [2024-07-15 13:56:03.507120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:37.060 [2024-07-15 13:56:03.507155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:37.060 [2024-07-15 13:56:03.507334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:37.060 [2024-07-15 13:56:03.507345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:37.060 [2024-07-15 13:56:03.507361] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:37.060 [2024-07-15 13:56:03.510118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:37.060 [2024-07-15 13:56:03.513692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:37.060 [2024-07-15 13:56:03.516250] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:37.060 [2024-07-15 13:56:03.516273] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:37.060 [2024-07-15 13:56:03.516282] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:25:37.348 [2024-07-15 13:56:03.560548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:37.348 [2024-07-15 13:56:03.647663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.348 [2024-07-15 13:56:03.647704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.348 [2024-07-15 13:56:03.647714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.349 [2024-07-15 13:56:03.647723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.349 [2024-07-15 13:56:03.647731] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:37.349 [2024-07-15 13:56:03.647801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:37.349 [2024-07-15 13:56:03.647886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.349 [2024-07-15 13:56:03.647887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.913 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:37.913 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:37.913 13:56:04 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:37.913 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:37.913 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:37.914 13:56:04 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.914 13:56:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:37.914 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.914 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:37.914 [2024-07-15 13:56:04.379746] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13e0a80/0x13e4f70) succeed. 00:25:37.914 [2024-07-15 13:56:04.389466] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13e2020/0x1426600) succeed. 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.171 [2024-07-15 13:56:04.520259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:38.171 [2024-07-15 13:56:04.520304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:38.171 [2024-07-15 13:56:04.520485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:38.171 [2024-07-15 13:56:04.520496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:38.171 [2024-07-15 13:56:04.520508] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:38.171 [2024-07-15 13:56:04.520535] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:38.171 [2024-07-15 13:56:04.523289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:38.171 Malloc0 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.171 [2024-07-15 13:56:04.533540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:38.171 [2024-07-15 13:56:04.536103] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:38.171 [2024-07-15 13:56:04.536125] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:38.171 [2024-07-15 13:56:04.536134] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:38.171 [2024-07-15 13:56:04.555703] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.171 13:56:04 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2597209 00:25:39.103 [2024-07-15 13:56:05.539920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:39.103 [2024-07-15 13:56:05.539943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:39.103 [2024-07-15 13:56:05.540119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:39.103 [2024-07-15 13:56:05.540130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:39.103 [2024-07-15 13:56:05.540140] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:39.103 [2024-07-15 13:56:05.542896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:39.103 [2024-07-15 13:56:05.546799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:39.103 [2024-07-15 13:56:05.587141] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
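Interleaved with the host's retry noise, the RPC trace above rebuilds the whole target: an RDMA transport with 1024 shared buffers and an 8192-byte in-capsule data size, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and an RDMA listener on 192.168.100.8 port 4420, after which the stalled bdevperf host finally reports a successful reset. Done by hand against a bare nvmf_tgt, the same bring-up is five RPCs (a sketch using scripts/rpc.py directly instead of the harness's rpc_cmd wrapper):

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420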
00:25:47.207 00:25:47.207 Latency(us) 00:25:47.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.207 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:47.207 Verification LBA range: start 0x0 length 0x4000 00:25:47.207 Nvme1n1 : 15.00 11823.45 46.19 13738.82 0.00 4986.58 340.15 1072282.94 00:25:47.207 =================================================================================================================== 00:25:47.207 Total : 11823.45 46.19 13738.82 0.00 4986.58 340.15 1072282.94 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:47.207 rmmod nvme_rdma 00:25:47.207 rmmod nvme_fabrics 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2597950 ']' 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2597950 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2597950 ']' 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2597950 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2597950 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2597950' 00:25:47.207 killing process with pid 2597950 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2597950 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2597950 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:47.207 00:25:47.207 real 0m25.795s 00:25:47.207 user 1m4.799s 00:25:47.207 sys 0m6.634s 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:47.207 13:56:13 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:47.207 ************************************ 00:25:47.207 END TEST nvmf_bdevperf 00:25:47.207 ************************************ 00:25:47.207 13:56:13 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:25:47.207 13:56:13 nvmf_rdma -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:25:47.207 13:56:13 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:47.207 13:56:13 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.207 13:56:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:47.207 ************************************ 00:25:47.207 START TEST nvmf_target_disconnect 00:25:47.207 ************************************ 00:25:47.207 13:56:13 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:25:47.207 * Looking for test storage... 00:25:47.207 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:47.207 13:56:13 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.207 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:47.207 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 
-- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:47.208 13:56:13 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:53.777 13:56:20 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:25:53.777 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:25:53.777 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:25:53.777 Found net devices under 0000:18:00.0: mlx_0_0 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:25:53.777 Found net devices under 0000:18:00.1: mlx_0_1 00:25:53.777 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:53.778 13:56:20 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:53.778 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:54.038 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:54.038 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:25:54.038 altname enp24s0f0np0 00:25:54.038 altname ens785f0np0 00:25:54.038 inet 192.168.100.8/24 scope global mlx_0_0 00:25:54.038 valid_lft forever preferred_lft forever 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:54.038 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:54.038 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:25:54.038 altname enp24s0f1np1 00:25:54.038 altname ens785f1np1 00:25:54.038 inet 192.168.100.9/24 scope global mlx_0_1 00:25:54.038 valid_lft forever preferred_lft forever 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:54.038 192.168.100.9' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:54.038 192.168.100.9' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:54.038 192.168.100.9' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:54.038 13:56:20 
nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:54.038 ************************************ 00:25:54.038 START TEST nvmf_target_disconnect_tc1 00:25:54.038 ************************************ 00:25:54.038 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:25:54.039 13:56:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:54.300 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.300 [2024-07-15 13:56:20.655949] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:25:54.300 [2024-07-15 13:56:20.656049] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:25:54.300 [2024-07-15 13:56:20.656077] 
nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:25:55.236 [2024-07-15 13:56:21.660049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:55.237 [2024-07-15 13:56:21.660114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:25:55.237 [2024-07-15 13:56:21.660149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:25:55.237 [2024-07-15 13:56:21.660212] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:55.237 [2024-07-15 13:56:21.660243] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:55.237 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:25:55.237 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:55.237 Initializing NVMe Controllers 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:55.237 00:25:55.237 real 0m1.154s 00:25:55.237 user 0m0.883s 00:25:55.237 sys 0m0.259s 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:55.237 ************************************ 00:25:55.237 END TEST nvmf_target_disconnect_tc1 00:25:55.237 ************************************ 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:55.237 ************************************ 00:25:55.237 START TEST nvmf_target_disconnect_tc2 00:25:55.237 ************************************ 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.237 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:55.237 13:56:21 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.495 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2602338 00:25:55.495 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2602338 00:25:55.495 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:55.495 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2602338 ']' 00:25:55.495 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.495 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.495 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.495 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.495 13:56:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.495 [2024-07-15 13:56:21.816462] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:25:55.495 [2024-07-15 13:56:21.816520] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.495 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.495 [2024-07-15 13:56:21.901764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.495 [2024-07-15 13:56:21.992395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.495 [2024-07-15 13:56:21.992443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.495 [2024-07-15 13:56:21.992453] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.495 [2024-07-15 13:56:21.992461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.495 [2024-07-15 13:56:21.992468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
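The nvmfappstart step traced above amounts to launching the target binary with the 0xF0 core mask and polling its RPC socket until it answers. A minimal stand-alone sketch of that step, assuming the repository path shown in the trace and that the harness's rpc_cmd maps to scripts/rpc.py on the default /var/tmp/spdk.sock (the poll interval and timeout below are illustrative, not the harness's actual values):

    # Sketch only: approximates the nvmfappstart/waitforlisten behaviour seen in the trace above.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # Poll the default RPC socket until the target responds.
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done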
00:25:55.495 [2024-07-15 13:56:21.992642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:55.495 [2024-07-15 13:56:21.992744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:55.495 [2024-07-15 13:56:21.992845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:55.495 [2024-07-15 13:56:21.992846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.425 Malloc0 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.425 [2024-07-15 13:56:22.735297] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aa2200/0x1aadf80) succeed. 00:25:56.425 [2024-07-15 13:56:22.745172] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aa3840/0x1aef610) succeed. 
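The rpc_cmd calls above create the Malloc0 bdev and the RDMA transport; the calls that follow in the next chunk add the subsystem, its namespace, and the 192.168.100.8:4420 listeners. Issued by hand, and assuming rpc_cmd simply forwards to scripts/rpc.py as in the previous sketch, the whole configuration looks roughly like this (arguments copied from the trace):

    # Sketch of the target configuration the test script drives via rpc_cmd.
    RPC="$SPDK/scripts/rpc.py"      # $SPDK as in the previous sketch

    $RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420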
00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.425 [2024-07-15 13:56:22.890777] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2602539 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:56.425 13:56:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:56.682 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.577 13:56:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2602338 00:25:58.577 13:56:24 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:59.952 
Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Read completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 Write completed with error (sct=0, sc=8) 00:25:59.953 starting I/O failed 00:25:59.953 [2024-07-15 13:56:26.101295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:00.520 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2602338 Killed "${NVMF_APP[@]}" "$@" 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:00.520 13:56:26 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2603028 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2603028 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2603028 ']' 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.520 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.521 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.521 13:56:26 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:00.521 [2024-07-15 13:56:26.971930] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 
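To recap the tc2 sequence that produced the burst of failed completions above: the reconnect example is started against the 192.168.100.8:4420 listener, the first target process (pid 2602338) is killed with SIGKILL two seconds in, and a replacement nvmf_tgt is started and reconfigured. Stripped of the harness wrappers, the flow is roughly the following sketch (arguments taken from the trace, process handling simplified):

    # Sketch of the kill/restart flow driven by host/target_disconnect.sh in the trace.
    "$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!

    sleep 2
    kill -9 "$nvmfpid"        # hard-kill the first target; in-flight I/O completes with errors

    sleep 2
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &   # restart, then repeat the RPC setup shown earlier
    nvmfpid=$!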
00:26:00.521 [2024-07-15 13:56:26.971990] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.521 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.778 [2024-07-15 13:56:27.061436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:00.778 Read completed with error (sct=0, sc=8) 00:26:00.778 starting I/O failed 00:26:00.778 Read completed with error (sct=0, sc=8) 00:26:00.778 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Write completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 Read completed with error (sct=0, sc=8) 00:26:00.779 starting I/O failed 00:26:00.779 [2024-07-15 13:56:27.106547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.779 [2024-07-15 13:56:27.149616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:00.779 [2024-07-15 13:56:27.149662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.779 [2024-07-15 13:56:27.149671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.779 [2024-07-15 13:56:27.149680] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.779 [2024-07-15 13:56:27.149687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.779 [2024-07-15 13:56:27.149805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:00.779 [2024-07-15 13:56:27.149911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:00.779 [2024-07-15 13:56:27.150009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:00.779 [2024-07-15 13:56:27.150010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:01.344 Malloc0 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.344 13:56:27 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:01.602 [2024-07-15 13:56:27.889277] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bbf200/0x1bcaf80) succeed. 00:26:01.602 [2024-07-15 13:56:27.899096] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bc0840/0x1c0c610) succeed. 
00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:01.602 [2024-07-15 13:56:28.050948] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.602 13:56:28 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2602539 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O 
failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Read completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.602 Write completed with error (sct=0, sc=8) 00:26:01.602 starting I/O failed 00:26:01.603 [2024-07-15 13:56:28.111587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.603 [2024-07-15 13:56:28.124935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.603 [2024-07-15 13:56:28.124995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.603 [2024-07-15 13:56:28.125016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.603 [2024-07-15 13:56:28.125028] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.603 [2024-07-15 13:56:28.125041] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.860 [2024-07-15 13:56:28.135044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.860 qpair failed and we were unable to recover it. 
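The blocks that follow repeat the pattern just shown: the host side of the reconnect example issues a fabrics CONNECT for an I/O queue pair that still references controller ID 0x1, the restarted target reports it as an unknown controller and rejects the command with the status shown (sct 1, sc 130), and the host marks the qpair as unrecoverable before the same triple of errors recurs on the next attempt. If one wanted to confirm the restarted target's view by hand, the subsystem can be queried over the same RPC socket; a sketch, assuming the nvmf_subsystem_get_* RPCs are available in this SPDK build:

    # Optional manual check, not part of the test: list what the restarted target knows about.
    $RPC nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1    # should show the 192.168.100.8:4420 listener
    $RPC nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1  # the old association's controller ID 0x1 is not expected here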
00:26:01.860 [2024-07-15 13:56:28.144840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.860 [2024-07-15 13:56:28.144882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.860 [2024-07-15 13:56:28.144901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.860 [2024-07-15 13:56:28.144911] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.860 [2024-07-15 13:56:28.144920] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.860 [2024-07-15 13:56:28.155140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.860 qpair failed and we were unable to recover it. 00:26:01.860 [2024-07-15 13:56:28.164820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.860 [2024-07-15 13:56:28.164861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.860 [2024-07-15 13:56:28.164879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.860 [2024-07-15 13:56:28.164889] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.860 [2024-07-15 13:56:28.164898] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.860 [2024-07-15 13:56:28.175140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.860 qpair failed and we were unable to recover it. 00:26:01.860 [2024-07-15 13:56:28.184945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.860 [2024-07-15 13:56:28.184989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.860 [2024-07-15 13:56:28.185007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.860 [2024-07-15 13:56:28.185017] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.860 [2024-07-15 13:56:28.185026] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.860 [2024-07-15 13:56:28.195345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.860 qpair failed and we were unable to recover it. 
00:26:01.860 [2024-07-15 13:56:28.205080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.860 [2024-07-15 13:56:28.205127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.860 [2024-07-15 13:56:28.205144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.860 [2024-07-15 13:56:28.205154] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.860 [2024-07-15 13:56:28.205162] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.860 [2024-07-15 13:56:28.215443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.860 qpair failed and we were unable to recover it. 00:26:01.860 [2024-07-15 13:56:28.225115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.860 [2024-07-15 13:56:28.225153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.860 [2024-07-15 13:56:28.225171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.860 [2024-07-15 13:56:28.225180] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.860 [2024-07-15 13:56:28.225189] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.860 [2024-07-15 13:56:28.235372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.860 qpair failed and we were unable to recover it. 00:26:01.860 [2024-07-15 13:56:28.245144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.860 [2024-07-15 13:56:28.245182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.860 [2024-07-15 13:56:28.245200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.860 [2024-07-15 13:56:28.245209] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.860 [2024-07-15 13:56:28.245218] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.861 [2024-07-15 13:56:28.255436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.861 qpair failed and we were unable to recover it. 
00:26:01.861 [2024-07-15 13:56:28.265206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.861 [2024-07-15 13:56:28.265248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.861 [2024-07-15 13:56:28.265266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.861 [2024-07-15 13:56:28.265275] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.861 [2024-07-15 13:56:28.265284] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.861 [2024-07-15 13:56:28.275614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.861 qpair failed and we were unable to recover it. 00:26:01.861 [2024-07-15 13:56:28.285205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.861 [2024-07-15 13:56:28.285246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.861 [2024-07-15 13:56:28.285264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.861 [2024-07-15 13:56:28.285274] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.861 [2024-07-15 13:56:28.285285] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.861 [2024-07-15 13:56:28.295542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.861 qpair failed and we were unable to recover it. 00:26:01.861 [2024-07-15 13:56:28.305437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.861 [2024-07-15 13:56:28.305478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.861 [2024-07-15 13:56:28.305495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.861 [2024-07-15 13:56:28.305508] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.861 [2024-07-15 13:56:28.305517] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.861 [2024-07-15 13:56:28.315684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.861 qpair failed and we were unable to recover it. 
00:26:01.861 [2024-07-15 13:56:28.325347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.861 [2024-07-15 13:56:28.325387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.861 [2024-07-15 13:56:28.325404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.861 [2024-07-15 13:56:28.325413] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.861 [2024-07-15 13:56:28.325422] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.861 [2024-07-15 13:56:28.335637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.861 qpair failed and we were unable to recover it. 00:26:01.861 [2024-07-15 13:56:28.345435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.861 [2024-07-15 13:56:28.345473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.861 [2024-07-15 13:56:28.345490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.861 [2024-07-15 13:56:28.345500] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.861 [2024-07-15 13:56:28.345509] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.861 [2024-07-15 13:56:28.355898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.861 qpair failed and we were unable to recover it. 00:26:01.861 [2024-07-15 13:56:28.365509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.861 [2024-07-15 13:56:28.365547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.861 [2024-07-15 13:56:28.365568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.861 [2024-07-15 13:56:28.365578] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.861 [2024-07-15 13:56:28.365587] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:01.861 [2024-07-15 13:56:28.375650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:01.861 qpair failed and we were unable to recover it. 
00:26:01.861 [2024-07-15 13:56:28.385595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.861 [2024-07-15 13:56:28.385635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.861 [2024-07-15 13:56:28.385652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.861 [2024-07-15 13:56:28.385662] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.861 [2024-07-15 13:56:28.385671] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.395792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 00:26:02.119 [2024-07-15 13:56:28.405644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.405685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.405702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.405712] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.405721] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.416061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 00:26:02.119 [2024-07-15 13:56:28.425768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.425808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.425825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.425835] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.425844] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.436174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 
00:26:02.119 [2024-07-15 13:56:28.445720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.445771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.445788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.445798] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.445806] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.455966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 00:26:02.119 [2024-07-15 13:56:28.465845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.465887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.465905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.465914] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.465923] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.476050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 00:26:02.119 [2024-07-15 13:56:28.485924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.485962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.485984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.485994] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.486002] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.496247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 
00:26:02.119 [2024-07-15 13:56:28.505970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.506009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.506026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.506036] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.506045] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.516260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 00:26:02.119 [2024-07-15 13:56:28.526005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.526046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.526063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.526073] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.526082] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.536381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 00:26:02.119 [2024-07-15 13:56:28.545987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.546029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.546046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.546055] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.546064] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.556389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 
00:26:02.119 [2024-07-15 13:56:28.566126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.566164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.566180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.566190] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.566202] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.576428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 00:26:02.119 [2024-07-15 13:56:28.586140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.586178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.586196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.586205] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.586214] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.119 [2024-07-15 13:56:28.596460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.119 qpair failed and we were unable to recover it. 00:26:02.119 [2024-07-15 13:56:28.606270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.119 [2024-07-15 13:56:28.606330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.119 [2024-07-15 13:56:28.606348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.119 [2024-07-15 13:56:28.606359] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.119 [2024-07-15 13:56:28.606368] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.120 [2024-07-15 13:56:28.616560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.120 qpair failed and we were unable to recover it. 
00:26:02.120 [2024-07-15 13:56:28.626388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.120 [2024-07-15 13:56:28.626428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.120 [2024-07-15 13:56:28.626445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.120 [2024-07-15 13:56:28.626454] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.120 [2024-07-15 13:56:28.626463] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.120 [2024-07-15 13:56:28.636663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.120 qpair failed and we were unable to recover it. 00:26:02.377 [2024-07-15 13:56:28.646285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.378 [2024-07-15 13:56:28.646328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.378 [2024-07-15 13:56:28.646345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.378 [2024-07-15 13:56:28.646354] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.378 [2024-07-15 13:56:28.646363] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.378 [2024-07-15 13:56:28.656736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.378 qpair failed and we were unable to recover it. 00:26:02.378 [2024-07-15 13:56:28.666407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.378 [2024-07-15 13:56:28.666447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.378 [2024-07-15 13:56:28.666465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.378 [2024-07-15 13:56:28.666474] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.378 [2024-07-15 13:56:28.666483] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.378 [2024-07-15 13:56:28.676761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.378 qpair failed and we were unable to recover it. 
00:26:02.378 [2024-07-15 13:56:28.686451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.378 [2024-07-15 13:56:28.686496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.378 [2024-07-15 13:56:28.686514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.378 [2024-07-15 13:56:28.686523] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.378 [2024-07-15 13:56:28.686532] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.378 [2024-07-15 13:56:28.696911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.378 qpair failed and we were unable to recover it. 00:26:02.378 [2024-07-15 13:56:28.706472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.378 [2024-07-15 13:56:28.706513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.378 [2024-07-15 13:56:28.706531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.378 [2024-07-15 13:56:28.706540] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.378 [2024-07-15 13:56:28.706549] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.378 [2024-07-15 13:56:28.716839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.378 qpair failed and we were unable to recover it. 00:26:02.378 [2024-07-15 13:56:28.726632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.378 [2024-07-15 13:56:28.726670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.378 [2024-07-15 13:56:28.726688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.378 [2024-07-15 13:56:28.726697] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.378 [2024-07-15 13:56:28.726706] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.378 [2024-07-15 13:56:28.737008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.378 qpair failed and we were unable to recover it. 
00:26:02.378 [2024-07-15 13:56:28.746777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.378 [2024-07-15 13:56:28.746820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.378 [2024-07-15 13:56:28.746837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.378 [2024-07-15 13:56:28.746850] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.378 [2024-07-15 13:56:28.746859] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.378 [2024-07-15 13:56:28.756964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.378 qpair failed and we were unable to recover it. 00:26:02.378 [2024-07-15 13:56:28.766806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.378 [2024-07-15 13:56:28.766850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.378 [2024-07-15 13:56:28.766868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.378 [2024-07-15 13:56:28.766877] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.378 [2024-07-15 13:56:28.766886] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.378 [2024-07-15 13:56:28.777138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.378 qpair failed and we were unable to recover it. 00:26:02.378 [2024-07-15 13:56:28.786790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.378 [2024-07-15 13:56:28.786826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.378 [2024-07-15 13:56:28.786843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.378 [2024-07-15 13:56:28.786852] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.378 [2024-07-15 13:56:28.786861] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.378 [2024-07-15 13:56:28.797090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.378 qpair failed and we were unable to recover it. 
00:26:02.378 [2024-07-15 13:56:28.806882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.378 [2024-07-15 13:56:28.806915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.378 [2024-07-15 13:56:28.806933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.378 [2024-07-15 13:56:28.806942] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.378 [2024-07-15 13:56:28.806951] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.378 [2024-07-15 13:56:28.817279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.378 qpair failed and we were unable to recover it. 00:26:02.378 [2024-07-15 13:56:28.826919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.378 [2024-07-15 13:56:28.826960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.378 [2024-07-15 13:56:28.826977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.379 [2024-07-15 13:56:28.826986] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.379 [2024-07-15 13:56:28.826995] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.379 [2024-07-15 13:56:28.837198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.379 qpair failed and we were unable to recover it. 00:26:02.379 [2024-07-15 13:56:28.846848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.379 [2024-07-15 13:56:28.846892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.379 [2024-07-15 13:56:28.846909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.379 [2024-07-15 13:56:28.846918] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.379 [2024-07-15 13:56:28.846927] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.379 [2024-07-15 13:56:28.857069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.379 qpair failed and we were unable to recover it. 
00:26:02.379 [2024-07-15 13:56:28.866947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.379 [2024-07-15 13:56:28.866985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.379 [2024-07-15 13:56:28.867001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.379 [2024-07-15 13:56:28.867011] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.379 [2024-07-15 13:56:28.867020] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.379 [2024-07-15 13:56:28.877458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.379 qpair failed and we were unable to recover it. 00:26:02.379 [2024-07-15 13:56:28.887026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.379 [2024-07-15 13:56:28.887066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.379 [2024-07-15 13:56:28.887084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.379 [2024-07-15 13:56:28.887093] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.379 [2024-07-15 13:56:28.887102] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.379 [2024-07-15 13:56:28.897380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.379 qpair failed and we were unable to recover it. 00:26:02.637 [2024-07-15 13:56:28.907134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.637 [2024-07-15 13:56:28.907192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.637 [2024-07-15 13:56:28.907211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.637 [2024-07-15 13:56:28.907221] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.637 [2024-07-15 13:56:28.907231] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.637 [2024-07-15 13:56:28.917355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.637 qpair failed and we were unable to recover it. 
00:26:02.637 [2024-07-15 13:56:28.927175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.637 [2024-07-15 13:56:28.927216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.637 [2024-07-15 13:56:28.927236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.637 [2024-07-15 13:56:28.927246] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.637 [2024-07-15 13:56:28.927254] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.637 [2024-07-15 13:56:28.937519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.637 qpair failed and we were unable to recover it. 00:26:02.637 [2024-07-15 13:56:28.947154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.637 [2024-07-15 13:56:28.947194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.637 [2024-07-15 13:56:28.947211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.637 [2024-07-15 13:56:28.947221] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.637 [2024-07-15 13:56:28.947229] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.637 [2024-07-15 13:56:28.957516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.637 qpair failed and we were unable to recover it. 00:26:02.637 [2024-07-15 13:56:28.967255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.637 [2024-07-15 13:56:28.967292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.637 [2024-07-15 13:56:28.967309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.637 [2024-07-15 13:56:28.967318] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.637 [2024-07-15 13:56:28.967327] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.637 [2024-07-15 13:56:28.977794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.637 qpair failed and we were unable to recover it. 
00:26:02.637 [2024-07-15 13:56:28.987257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.637 [2024-07-15 13:56:28.987297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.637 [2024-07-15 13:56:28.987314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.637 [2024-07-15 13:56:28.987323] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.637 [2024-07-15 13:56:28.987332] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.637 [2024-07-15 13:56:28.997656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.637 qpair failed and we were unable to recover it. 00:26:02.637 [2024-07-15 13:56:29.007310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.637 [2024-07-15 13:56:29.007356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.637 [2024-07-15 13:56:29.007372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.637 [2024-07-15 13:56:29.007382] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.637 [2024-07-15 13:56:29.007394] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.637 [2024-07-15 13:56:29.017731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.637 qpair failed and we were unable to recover it. 00:26:02.637 [2024-07-15 13:56:29.027404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.637 [2024-07-15 13:56:29.027441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.637 [2024-07-15 13:56:29.027457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.637 [2024-07-15 13:56:29.027467] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.637 [2024-07-15 13:56:29.027476] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.637 [2024-07-15 13:56:29.037682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.637 qpair failed and we were unable to recover it. 
00:26:02.637 [2024-07-15 13:56:29.047477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.637 [2024-07-15 13:56:29.047515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.637 [2024-07-15 13:56:29.047533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.637 [2024-07-15 13:56:29.047542] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.637 [2024-07-15 13:56:29.047551] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.637 [2024-07-15 13:56:29.057821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.637 qpair failed and we were unable to recover it. 00:26:02.637 [2024-07-15 13:56:29.067475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.637 [2024-07-15 13:56:29.067515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.637 [2024-07-15 13:56:29.067533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.637 [2024-07-15 13:56:29.067542] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.637 [2024-07-15 13:56:29.067551] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.637 [2024-07-15 13:56:29.077839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.637 qpair failed and we were unable to recover it. 00:26:02.637 [2024-07-15 13:56:29.087616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.637 [2024-07-15 13:56:29.087660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.637 [2024-07-15 13:56:29.087678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.637 [2024-07-15 13:56:29.087687] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.637 [2024-07-15 13:56:29.087696] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.637 [2024-07-15 13:56:29.098120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.637 qpair failed and we were unable to recover it. 
00:26:02.638 [2024-07-15 13:56:29.107649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.638 [2024-07-15 13:56:29.107684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.638 [2024-07-15 13:56:29.107702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.638 [2024-07-15 13:56:29.107712] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.638 [2024-07-15 13:56:29.107720] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.638 [2024-07-15 13:56:29.117987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.638 qpair failed and we were unable to recover it. 00:26:02.638 [2024-07-15 13:56:29.127756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.638 [2024-07-15 13:56:29.127792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.638 [2024-07-15 13:56:29.127809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.638 [2024-07-15 13:56:29.127819] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.638 [2024-07-15 13:56:29.127827] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.638 [2024-07-15 13:56:29.138106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.638 qpair failed and we were unable to recover it. 00:26:02.638 [2024-07-15 13:56:29.147755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.638 [2024-07-15 13:56:29.147792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.638 [2024-07-15 13:56:29.147809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.638 [2024-07-15 13:56:29.147819] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.638 [2024-07-15 13:56:29.147827] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.638 [2024-07-15 13:56:29.158176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.638 qpair failed and we were unable to recover it. 
00:26:02.896 [2024-07-15 13:56:29.167774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.167818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.167836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.167845] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.167854] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.178209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 00:26:02.896 [2024-07-15 13:56:29.187869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.187905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.187923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.187936] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.187945] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.198342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 00:26:02.896 [2024-07-15 13:56:29.207882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.207921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.207938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.207948] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.207957] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.218119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 
00:26:02.896 [2024-07-15 13:56:29.228009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.228051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.228068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.228078] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.228087] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.238494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 00:26:02.896 [2024-07-15 13:56:29.248103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.248142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.248159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.248168] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.248177] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.258457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 00:26:02.896 [2024-07-15 13:56:29.268092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.268127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.268144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.268153] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.268162] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.278466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 
00:26:02.896 [2024-07-15 13:56:29.288219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.288254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.288272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.288281] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.288290] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.298508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 00:26:02.896 [2024-07-15 13:56:29.308263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.308301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.308318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.308328] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.308336] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.318585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 00:26:02.896 [2024-07-15 13:56:29.328260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.328299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.328316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.328326] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.328335] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.338730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 
00:26:02.896 [2024-07-15 13:56:29.348434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.348476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.348492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.348502] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.348511] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.358819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 00:26:02.896 [2024-07-15 13:56:29.368441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.368479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.896 [2024-07-15 13:56:29.368500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.896 [2024-07-15 13:56:29.368509] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.896 [2024-07-15 13:56:29.368518] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.896 [2024-07-15 13:56:29.378865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.896 qpair failed and we were unable to recover it. 00:26:02.896 [2024-07-15 13:56:29.388388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.896 [2024-07-15 13:56:29.388428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.897 [2024-07-15 13:56:29.388446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.897 [2024-07-15 13:56:29.388455] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.897 [2024-07-15 13:56:29.388464] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.897 [2024-07-15 13:56:29.398984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.897 qpair failed and we were unable to recover it. 
00:26:02.897 [2024-07-15 13:56:29.408558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.897 [2024-07-15 13:56:29.408608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.897 [2024-07-15 13:56:29.408625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.897 [2024-07-15 13:56:29.408634] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.897 [2024-07-15 13:56:29.408643] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:02.897 [2024-07-15 13:56:29.418796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:02.897 qpair failed and we were unable to recover it. 00:26:03.154 [2024-07-15 13:56:29.428658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.154 [2024-07-15 13:56:29.428698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.154 [2024-07-15 13:56:29.428715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.154 [2024-07-15 13:56:29.428725] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.154 [2024-07-15 13:56:29.428733] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.154 [2024-07-15 13:56:29.438924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.154 qpair failed and we were unable to recover it. 00:26:03.154 [2024-07-15 13:56:29.448782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.154 [2024-07-15 13:56:29.448836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.154 [2024-07-15 13:56:29.448854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.154 [2024-07-15 13:56:29.448865] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.154 [2024-07-15 13:56:29.448877] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.154 [2024-07-15 13:56:29.459195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.154 qpair failed and we were unable to recover it. 
00:26:03.154 [2024-07-15 13:56:29.468692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.154 [2024-07-15 13:56:29.468732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.154 [2024-07-15 13:56:29.468749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.154 [2024-07-15 13:56:29.468758] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.154 [2024-07-15 13:56:29.468767] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.154 [2024-07-15 13:56:29.479060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.154 qpair failed and we were unable to recover it. 00:26:03.154 [2024-07-15 13:56:29.488893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.154 [2024-07-15 13:56:29.488938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.154 [2024-07-15 13:56:29.488956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.154 [2024-07-15 13:56:29.488966] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.154 [2024-07-15 13:56:29.488974] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.154 [2024-07-15 13:56:29.499054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.154 qpair failed and we were unable to recover it. 00:26:03.154 [2024-07-15 13:56:29.508795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.154 [2024-07-15 13:56:29.508840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.154 [2024-07-15 13:56:29.508857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.154 [2024-07-15 13:56:29.508867] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.154 [2024-07-15 13:56:29.508875] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.155 [2024-07-15 13:56:29.519045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.155 qpair failed and we were unable to recover it. 
00:26:03.155 [2024-07-15 13:56:29.528995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.155 [2024-07-15 13:56:29.529034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.155 [2024-07-15 13:56:29.529051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.155 [2024-07-15 13:56:29.529060] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.155 [2024-07-15 13:56:29.529069] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.155 [2024-07-15 13:56:29.539238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.155 qpair failed and we were unable to recover it. 00:26:03.155 [2024-07-15 13:56:29.548859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.155 [2024-07-15 13:56:29.548900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.155 [2024-07-15 13:56:29.548918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.155 [2024-07-15 13:56:29.548927] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.155 [2024-07-15 13:56:29.548936] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.155 [2024-07-15 13:56:29.559292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.155 qpair failed and we were unable to recover it. 00:26:03.155 [2024-07-15 13:56:29.569061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.155 [2024-07-15 13:56:29.569107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.155 [2024-07-15 13:56:29.569124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.155 [2024-07-15 13:56:29.569133] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.155 [2024-07-15 13:56:29.569142] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.155 [2024-07-15 13:56:29.579370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.155 qpair failed and we were unable to recover it. 
00:26:03.155 [2024-07-15 13:56:29.589062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.155 [2024-07-15 13:56:29.589103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.155 [2024-07-15 13:56:29.589120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.155 [2024-07-15 13:56:29.589130] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.155 [2024-07-15 13:56:29.589139] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.155 [2024-07-15 13:56:29.599371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.155 qpair failed and we were unable to recover it. 00:26:03.155 [2024-07-15 13:56:29.609190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.155 [2024-07-15 13:56:29.609230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.155 [2024-07-15 13:56:29.609247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.155 [2024-07-15 13:56:29.609257] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.155 [2024-07-15 13:56:29.609266] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.155 [2024-07-15 13:56:29.619409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.155 qpair failed and we were unable to recover it. 00:26:03.155 [2024-07-15 13:56:29.629234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.155 [2024-07-15 13:56:29.629274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.155 [2024-07-15 13:56:29.629292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.155 [2024-07-15 13:56:29.629305] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.155 [2024-07-15 13:56:29.629314] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.155 [2024-07-15 13:56:29.639541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.155 qpair failed and we were unable to recover it. 
00:26:03.155 [2024-07-15 13:56:29.649212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.155 [2024-07-15 13:56:29.649252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.155 [2024-07-15 13:56:29.649269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.155 [2024-07-15 13:56:29.649279] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.155 [2024-07-15 13:56:29.649287] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.155 [2024-07-15 13:56:29.659585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.155 qpair failed and we were unable to recover it. 00:26:03.155 [2024-07-15 13:56:29.669251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.155 [2024-07-15 13:56:29.669290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.155 [2024-07-15 13:56:29.669307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.155 [2024-07-15 13:56:29.669317] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.155 [2024-07-15 13:56:29.669325] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.155 [2024-07-15 13:56:29.679446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.155 qpair failed and we were unable to recover it. 00:26:03.413 [2024-07-15 13:56:29.689313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.413 [2024-07-15 13:56:29.689353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.413 [2024-07-15 13:56:29.689371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.413 [2024-07-15 13:56:29.689381] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.413 [2024-07-15 13:56:29.689389] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.413 [2024-07-15 13:56:29.699762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.413 qpair failed and we were unable to recover it. 
00:26:03.413 [2024-07-15 13:56:29.709421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.413 [2024-07-15 13:56:29.709460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.413 [2024-07-15 13:56:29.709478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.413 [2024-07-15 13:56:29.709487] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.413 [2024-07-15 13:56:29.709496] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.413 [2024-07-15 13:56:29.719775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.413 qpair failed and we were unable to recover it. 00:26:03.413 [2024-07-15 13:56:29.729481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.413 [2024-07-15 13:56:29.729528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.413 [2024-07-15 13:56:29.729546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.413 [2024-07-15 13:56:29.729555] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.413 [2024-07-15 13:56:29.729569] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.413 [2024-07-15 13:56:29.739580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.413 qpair failed and we were unable to recover it. 00:26:03.413 [2024-07-15 13:56:29.749589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.413 [2024-07-15 13:56:29.749625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.413 [2024-07-15 13:56:29.749643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.413 [2024-07-15 13:56:29.749652] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.413 [2024-07-15 13:56:29.749661] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.413 [2024-07-15 13:56:29.759967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.413 qpair failed and we were unable to recover it. 
00:26:03.413 [2024-07-15 13:56:29.769506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.413 [2024-07-15 13:56:29.769549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.413 [2024-07-15 13:56:29.769571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.413 [2024-07-15 13:56:29.769581] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.413 [2024-07-15 13:56:29.769590] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.413 [2024-07-15 13:56:29.779814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.413 qpair failed and we were unable to recover it. 00:26:03.413 [2024-07-15 13:56:29.789559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.413 [2024-07-15 13:56:29.789606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.413 [2024-07-15 13:56:29.789623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.413 [2024-07-15 13:56:29.789633] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.413 [2024-07-15 13:56:29.789642] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.413 [2024-07-15 13:56:29.799951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.413 qpair failed and we were unable to recover it. 00:26:03.413 [2024-07-15 13:56:29.809660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.413 [2024-07-15 13:56:29.809707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.413 [2024-07-15 13:56:29.809728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.413 [2024-07-15 13:56:29.809738] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.413 [2024-07-15 13:56:29.809746] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.413 [2024-07-15 13:56:29.819988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.413 qpair failed and we were unable to recover it. 
00:26:03.413 [2024-07-15 13:56:29.829692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.413 [2024-07-15 13:56:29.829731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.413 [2024-07-15 13:56:29.829748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.413 [2024-07-15 13:56:29.829758] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.413 [2024-07-15 13:56:29.829766] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.413 [2024-07-15 13:56:29.840190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.413 qpair failed and we were unable to recover it. 00:26:03.413 [2024-07-15 13:56:29.849737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.414 [2024-07-15 13:56:29.849779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.414 [2024-07-15 13:56:29.849796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.414 [2024-07-15 13:56:29.849805] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.414 [2024-07-15 13:56:29.849814] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.414 [2024-07-15 13:56:29.859956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.414 qpair failed and we were unable to recover it. 00:26:03.414 [2024-07-15 13:56:29.869781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.414 [2024-07-15 13:56:29.869836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.414 [2024-07-15 13:56:29.869854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.414 [2024-07-15 13:56:29.869865] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.414 [2024-07-15 13:56:29.869875] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.414 [2024-07-15 13:56:29.880126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.414 qpair failed and we were unable to recover it. 
00:26:03.414 [2024-07-15 13:56:29.889850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.414 [2024-07-15 13:56:29.889892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.414 [2024-07-15 13:56:29.889909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.414 [2024-07-15 13:56:29.889919] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.414 [2024-07-15 13:56:29.889931] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.414 [2024-07-15 13:56:29.900187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.414 qpair failed and we were unable to recover it. 00:26:03.414 [2024-07-15 13:56:29.909921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.414 [2024-07-15 13:56:29.909960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.414 [2024-07-15 13:56:29.909977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.414 [2024-07-15 13:56:29.909986] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.414 [2024-07-15 13:56:29.909995] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.414 [2024-07-15 13:56:29.920268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.414 qpair failed and we were unable to recover it. 00:26:03.414 [2024-07-15 13:56:29.929981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.414 [2024-07-15 13:56:29.930017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.414 [2024-07-15 13:56:29.930034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.414 [2024-07-15 13:56:29.930043] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.414 [2024-07-15 13:56:29.930052] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.672 [2024-07-15 13:56:29.940287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.672 qpair failed and we were unable to recover it. 
00:26:03.672 [2024-07-15 13:56:29.950016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.672 [2024-07-15 13:56:29.950057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.672 [2024-07-15 13:56:29.950074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.672 [2024-07-15 13:56:29.950084] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.672 [2024-07-15 13:56:29.950093] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.672 [2024-07-15 13:56:29.960405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.672 qpair failed and we were unable to recover it. 00:26:03.672 [2024-07-15 13:56:29.970221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.672 [2024-07-15 13:56:29.970259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.672 [2024-07-15 13:56:29.970276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.672 [2024-07-15 13:56:29.970286] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.672 [2024-07-15 13:56:29.970294] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.672 [2024-07-15 13:56:29.980400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.672 qpair failed and we were unable to recover it. 00:26:03.672 [2024-07-15 13:56:29.990292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.672 [2024-07-15 13:56:29.990330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.672 [2024-07-15 13:56:29.990349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.672 [2024-07-15 13:56:29.990359] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.672 [2024-07-15 13:56:29.990367] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.672 [2024-07-15 13:56:30.000462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.672 qpair failed and we were unable to recover it. 
00:26:03.672 [2024-07-15 13:56:30.010242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.672 [2024-07-15 13:56:30.010279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.672 [2024-07-15 13:56:30.010296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.672 [2024-07-15 13:56:30.010306] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.672 [2024-07-15 13:56:30.010315] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.672 [2024-07-15 13:56:30.020780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.673 qpair failed and we were unable to recover it. 00:26:03.673 [2024-07-15 13:56:30.030331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.673 [2024-07-15 13:56:30.030376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.673 [2024-07-15 13:56:30.030397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.673 [2024-07-15 13:56:30.030407] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.673 [2024-07-15 13:56:30.030417] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.673 [2024-07-15 13:56:30.040649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.673 qpair failed and we were unable to recover it. 00:26:03.673 [2024-07-15 13:56:30.050406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.673 [2024-07-15 13:56:30.050448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.673 [2024-07-15 13:56:30.050466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.673 [2024-07-15 13:56:30.050476] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.673 [2024-07-15 13:56:30.050485] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.673 [2024-07-15 13:56:30.060834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.673 qpair failed and we were unable to recover it. 
00:26:03.673 [2024-07-15 13:56:30.070300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.673 [2024-07-15 13:56:30.070342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.673 [2024-07-15 13:56:30.070359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.673 [2024-07-15 13:56:30.070372] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.673 [2024-07-15 13:56:30.070381] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.673 [2024-07-15 13:56:30.080745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.673 qpair failed and we were unable to recover it. 00:26:03.673 [2024-07-15 13:56:30.090506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.673 [2024-07-15 13:56:30.090541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.673 [2024-07-15 13:56:30.090558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.673 [2024-07-15 13:56:30.090573] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.673 [2024-07-15 13:56:30.090582] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.673 [2024-07-15 13:56:30.100791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.673 qpair failed and we were unable to recover it. 00:26:03.673 [2024-07-15 13:56:30.110593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.673 [2024-07-15 13:56:30.110633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.673 [2024-07-15 13:56:30.110651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.673 [2024-07-15 13:56:30.110660] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.673 [2024-07-15 13:56:30.110669] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.673 [2024-07-15 13:56:30.120921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.673 qpair failed and we were unable to recover it. 
00:26:03.673 [2024-07-15 13:56:30.130633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.673 [2024-07-15 13:56:30.130674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.673 [2024-07-15 13:56:30.130691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.673 [2024-07-15 13:56:30.130700] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.673 [2024-07-15 13:56:30.130709] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.673 [2024-07-15 13:56:30.140902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.673 qpair failed and we were unable to recover it. 00:26:03.673 [2024-07-15 13:56:30.150659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.673 [2024-07-15 13:56:30.150697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.673 [2024-07-15 13:56:30.150714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.673 [2024-07-15 13:56:30.150724] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.673 [2024-07-15 13:56:30.150733] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.673 [2024-07-15 13:56:30.161119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.673 qpair failed and we were unable to recover it. 00:26:03.673 [2024-07-15 13:56:30.170712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.673 [2024-07-15 13:56:30.170747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.673 [2024-07-15 13:56:30.170764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.673 [2024-07-15 13:56:30.170773] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.673 [2024-07-15 13:56:30.170782] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.673 [2024-07-15 13:56:30.181056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.673 qpair failed and we were unable to recover it. 
00:26:03.673 [2024-07-15 13:56:30.190754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.673 [2024-07-15 13:56:30.190796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.673 [2024-07-15 13:56:30.190813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.673 [2024-07-15 13:56:30.190823] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.673 [2024-07-15 13:56:30.190831] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.932 [2024-07-15 13:56:30.200925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.932 qpair failed and we were unable to recover it. 00:26:03.932 [2024-07-15 13:56:30.210873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.932 [2024-07-15 13:56:30.210913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.932 [2024-07-15 13:56:30.210930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.932 [2024-07-15 13:56:30.210940] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.932 [2024-07-15 13:56:30.210949] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.932 [2024-07-15 13:56:30.220970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.932 qpair failed and we were unable to recover it. 00:26:03.932 [2024-07-15 13:56:30.230890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.932 [2024-07-15 13:56:30.230926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.932 [2024-07-15 13:56:30.230943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.932 [2024-07-15 13:56:30.230953] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.932 [2024-07-15 13:56:30.230961] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.932 [2024-07-15 13:56:30.241260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.932 qpair failed and we were unable to recover it. 
00:26:03.932 [2024-07-15 13:56:30.250931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.932 [2024-07-15 13:56:30.250964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.250985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.250994] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.251003] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.933 [2024-07-15 13:56:30.261373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.933 qpair failed and we were unable to recover it. 00:26:03.933 [2024-07-15 13:56:30.271029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.933 [2024-07-15 13:56:30.271069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.271086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.271096] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.271105] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.933 [2024-07-15 13:56:30.281264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.933 qpair failed and we were unable to recover it. 00:26:03.933 [2024-07-15 13:56:30.291137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.933 [2024-07-15 13:56:30.291178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.291196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.291206] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.291215] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.933 [2024-07-15 13:56:30.301461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.933 qpair failed and we were unable to recover it. 
00:26:03.933 [2024-07-15 13:56:30.311066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.933 [2024-07-15 13:56:30.311105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.311122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.311131] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.311140] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.933 [2024-07-15 13:56:30.321636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.933 qpair failed and we were unable to recover it. 00:26:03.933 [2024-07-15 13:56:30.331262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.933 [2024-07-15 13:56:30.331298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.331315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.331325] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.331337] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.933 [2024-07-15 13:56:30.341674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.933 qpair failed and we were unable to recover it. 00:26:03.933 [2024-07-15 13:56:30.351250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.933 [2024-07-15 13:56:30.351292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.351309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.351318] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.351327] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.933 [2024-07-15 13:56:30.361516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.933 qpair failed and we were unable to recover it. 
00:26:03.933 [2024-07-15 13:56:30.371273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.933 [2024-07-15 13:56:30.371315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.371332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.371342] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.371350] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.933 [2024-07-15 13:56:30.381838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.933 qpair failed and we were unable to recover it. 00:26:03.933 [2024-07-15 13:56:30.391354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.933 [2024-07-15 13:56:30.391389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.391407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.391417] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.391426] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.933 [2024-07-15 13:56:30.401774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.933 qpair failed and we were unable to recover it. 00:26:03.933 [2024-07-15 13:56:30.411341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.933 [2024-07-15 13:56:30.411379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.411396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.411406] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.411415] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.933 [2024-07-15 13:56:30.421601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.933 qpair failed and we were unable to recover it. 
00:26:03.933 [2024-07-15 13:56:30.431512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.933 [2024-07-15 13:56:30.431552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.431574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.431583] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.431592] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:03.933 [2024-07-15 13:56:30.441859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:03.933 qpair failed and we were unable to recover it. 00:26:03.933 [2024-07-15 13:56:30.451503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.933 [2024-07-15 13:56:30.451542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.933 [2024-07-15 13:56:30.451561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.933 [2024-07-15 13:56:30.451575] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.933 [2024-07-15 13:56:30.451584] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.191 [2024-07-15 13:56:30.461895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-07-15 13:56:30.471635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.191 [2024-07-15 13:56:30.471679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.191 [2024-07-15 13:56:30.471696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.191 [2024-07-15 13:56:30.471705] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.191 [2024-07-15 13:56:30.471714] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.191 [2024-07-15 13:56:30.481931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.191 qpair failed and we were unable to recover it. 
00:26:04.191 [2024-07-15 13:56:30.491639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.191 [2024-07-15 13:56:30.491673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.191 [2024-07-15 13:56:30.491690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.191 [2024-07-15 13:56:30.491699] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.191 [2024-07-15 13:56:30.491708] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.191 [2024-07-15 13:56:30.502013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-07-15 13:56:30.511726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.191 [2024-07-15 13:56:30.511767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.191 [2024-07-15 13:56:30.511784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.191 [2024-07-15 13:56:30.511797] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.191 [2024-07-15 13:56:30.511805] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.191 [2024-07-15 13:56:30.521976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.191 qpair failed and we were unable to recover it. 00:26:04.191 [2024-07-15 13:56:30.531738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.191 [2024-07-15 13:56:30.531774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.191 [2024-07-15 13:56:30.531791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.191 [2024-07-15 13:56:30.531800] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.191 [2024-07-15 13:56:30.531809] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.191 [2024-07-15 13:56:30.542178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.191 qpair failed and we were unable to recover it. 
00:26:04.191 [2024-07-15 13:56:30.551804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.191 [2024-07-15 13:56:30.551839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.191 [2024-07-15 13:56:30.551856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.191 [2024-07-15 13:56:30.551865] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.191 [2024-07-15 13:56:30.551874] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.191 [2024-07-15 13:56:30.562133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.192 qpair failed and we were unable to recover it. 00:26:04.192 [2024-07-15 13:56:30.571921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.192 [2024-07-15 13:56:30.571962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.192 [2024-07-15 13:56:30.571979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.192 [2024-07-15 13:56:30.571989] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.192 [2024-07-15 13:56:30.571998] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.192 [2024-07-15 13:56:30.582268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.192 qpair failed and we were unable to recover it. 00:26:04.192 [2024-07-15 13:56:30.591916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.192 [2024-07-15 13:56:30.591956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.192 [2024-07-15 13:56:30.591973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.192 [2024-07-15 13:56:30.591982] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.192 [2024-07-15 13:56:30.591992] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.192 [2024-07-15 13:56:30.602209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.192 qpair failed and we were unable to recover it. 
00:26:04.192 [2024-07-15 13:56:30.612049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.192 [2024-07-15 13:56:30.612090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.192 [2024-07-15 13:56:30.612107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.192 [2024-07-15 13:56:30.612116] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.192 [2024-07-15 13:56:30.612125] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.192 [2024-07-15 13:56:30.622239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.192 qpair failed and we were unable to recover it. 00:26:04.192 [2024-07-15 13:56:30.632065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.192 [2024-07-15 13:56:30.632102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.192 [2024-07-15 13:56:30.632119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.192 [2024-07-15 13:56:30.632129] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.192 [2024-07-15 13:56:30.632138] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.192 [2024-07-15 13:56:30.642501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.192 qpair failed and we were unable to recover it. 00:26:04.192 [2024-07-15 13:56:30.652075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.192 [2024-07-15 13:56:30.652108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.192 [2024-07-15 13:56:30.652125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.192 [2024-07-15 13:56:30.652134] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.192 [2024-07-15 13:56:30.652143] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.192 [2024-07-15 13:56:30.662622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.192 qpair failed and we were unable to recover it. 
00:26:04.192 [2024-07-15 13:56:30.672178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.192 [2024-07-15 13:56:30.672217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.192 [2024-07-15 13:56:30.672233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.192 [2024-07-15 13:56:30.672242] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.192 [2024-07-15 13:56:30.672251] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.192 [2024-07-15 13:56:30.682497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.192 qpair failed and we were unable to recover it. 00:26:04.192 [2024-07-15 13:56:30.692247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.192 [2024-07-15 13:56:30.692292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.192 [2024-07-15 13:56:30.692313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.192 [2024-07-15 13:56:30.692322] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.192 [2024-07-15 13:56:30.692331] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.192 [2024-07-15 13:56:30.702503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.192 qpair failed and we were unable to recover it. 00:26:04.192 [2024-07-15 13:56:30.712287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.192 [2024-07-15 13:56:30.712323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.192 [2024-07-15 13:56:30.712340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.192 [2024-07-15 13:56:30.712350] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.192 [2024-07-15 13:56:30.712358] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.450 [2024-07-15 13:56:30.722634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.450 qpair failed and we were unable to recover it. 
00:26:04.450 [2024-07-15 13:56:30.732505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.450 [2024-07-15 13:56:30.732543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.450 [2024-07-15 13:56:30.732559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.450 [2024-07-15 13:56:30.732573] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.450 [2024-07-15 13:56:30.732581] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.450 [2024-07-15 13:56:30.742682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.450 qpair failed and we were unable to recover it. 00:26:04.450 [2024-07-15 13:56:30.752412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.450 [2024-07-15 13:56:30.752452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.450 [2024-07-15 13:56:30.752469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.450 [2024-07-15 13:56:30.752479] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.450 [2024-07-15 13:56:30.752487] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.450 [2024-07-15 13:56:30.762874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.450 qpair failed and we were unable to recover it. 00:26:04.450 [2024-07-15 13:56:30.772466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.450 [2024-07-15 13:56:30.772514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.772531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.772541] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.772553] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.451 [2024-07-15 13:56:30.782873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.451 qpair failed and we were unable to recover it. 
00:26:04.451 [2024-07-15 13:56:30.792544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.451 [2024-07-15 13:56:30.792589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.792606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.792616] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.792625] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.451 [2024-07-15 13:56:30.802869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.451 qpair failed and we were unable to recover it. 00:26:04.451 [2024-07-15 13:56:30.812608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.451 [2024-07-15 13:56:30.812650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.812667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.812677] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.812686] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.451 [2024-07-15 13:56:30.823054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.451 qpair failed and we were unable to recover it. 00:26:04.451 [2024-07-15 13:56:30.832610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.451 [2024-07-15 13:56:30.832650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.832667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.832676] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.832685] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.451 [2024-07-15 13:56:30.843035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.451 qpair failed and we were unable to recover it. 
00:26:04.451 [2024-07-15 13:56:30.852731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.451 [2024-07-15 13:56:30.852770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.852787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.852796] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.852805] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.451 [2024-07-15 13:56:30.863074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.451 qpair failed and we were unable to recover it. 00:26:04.451 [2024-07-15 13:56:30.872787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.451 [2024-07-15 13:56:30.872826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.872843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.872853] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.872861] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.451 [2024-07-15 13:56:30.883253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.451 qpair failed and we were unable to recover it. 00:26:04.451 [2024-07-15 13:56:30.892821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.451 [2024-07-15 13:56:30.892858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.892875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.892885] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.892894] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.451 [2024-07-15 13:56:30.903162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.451 qpair failed and we were unable to recover it. 
00:26:04.451 [2024-07-15 13:56:30.912875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.451 [2024-07-15 13:56:30.912916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.912933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.912943] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.912952] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.451 [2024-07-15 13:56:30.923374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.451 qpair failed and we were unable to recover it. 00:26:04.451 [2024-07-15 13:56:30.932884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.451 [2024-07-15 13:56:30.932928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.932945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.932954] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.932963] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.451 [2024-07-15 13:56:30.943417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.451 qpair failed and we were unable to recover it. 00:26:04.451 [2024-07-15 13:56:30.953061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.451 [2024-07-15 13:56:30.953100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.953118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.953131] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.953140] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.451 [2024-07-15 13:56:30.963459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.451 qpair failed and we were unable to recover it. 
00:26:04.451 [2024-07-15 13:56:30.973065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.451 [2024-07-15 13:56:30.973102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.451 [2024-07-15 13:56:30.973119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.451 [2024-07-15 13:56:30.973128] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.451 [2024-07-15 13:56:30.973136] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:30.983625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 00:26:04.710 [2024-07-15 13:56:30.993154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:30.993192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:30.993208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:30.993218] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:30.993227] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.003568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 00:26:04.710 [2024-07-15 13:56:31.013273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.013316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.013332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.013342] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.013351] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.023521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 
00:26:04.710 [2024-07-15 13:56:31.033255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.033296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.033313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.033322] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.033331] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.043548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 00:26:04.710 [2024-07-15 13:56:31.053363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.053397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.053414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.053424] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.053433] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.063719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 00:26:04.710 [2024-07-15 13:56:31.073437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.073477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.073494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.073504] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.073513] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.083802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 
00:26:04.710 [2024-07-15 13:56:31.093618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.093657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.093675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.093684] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.093693] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.103936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 00:26:04.710 [2024-07-15 13:56:31.113596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.113631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.113648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.113658] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.113666] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.123919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 00:26:04.710 [2024-07-15 13:56:31.133752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.133787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.133808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.133818] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.133827] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.143792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 
00:26:04.710 [2024-07-15 13:56:31.153745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.153786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.153803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.153813] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.153821] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.164021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 00:26:04.710 [2024-07-15 13:56:31.173807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.173848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.173865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.173874] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.173883] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.184055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 00:26:04.710 [2024-07-15 13:56:31.193775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.193814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.193832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.193842] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.193851] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.203956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 
00:26:04.710 [2024-07-15 13:56:31.213989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.214025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.710 [2024-07-15 13:56:31.214042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.710 [2024-07-15 13:56:31.214052] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.710 [2024-07-15 13:56:31.214064] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.710 [2024-07-15 13:56:31.224045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.710 qpair failed and we were unable to recover it. 00:26:04.710 [2024-07-15 13:56:31.233819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.710 [2024-07-15 13:56:31.233882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.711 [2024-07-15 13:56:31.233900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.711 [2024-07-15 13:56:31.233910] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.711 [2024-07-15 13:56:31.233920] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.968 [2024-07-15 13:56:31.244137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.968 qpair failed and we were unable to recover it. 00:26:04.968 [2024-07-15 13:56:31.254027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.968 [2024-07-15 13:56:31.254067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.968 [2024-07-15 13:56:31.254084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.254093] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.254102] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.264283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 
00:26:04.969 [2024-07-15 13:56:31.274058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.274097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.274114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.274123] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.274132] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.284273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 00:26:04.969 [2024-07-15 13:56:31.294101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.294138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.294155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.294165] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.294173] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.304357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 00:26:04.969 [2024-07-15 13:56:31.314150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.314197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.314213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.314223] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.314232] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.324475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 
00:26:04.969 [2024-07-15 13:56:31.334298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.334344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.334362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.334371] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.334380] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.344462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 00:26:04.969 [2024-07-15 13:56:31.354228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.354269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.354286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.354296] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.354305] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.364616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 00:26:04.969 [2024-07-15 13:56:31.374283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.374319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.374336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.374345] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.374354] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.384530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 
00:26:04.969 [2024-07-15 13:56:31.394353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.394394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.394411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.394424] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.394433] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.404640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 00:26:04.969 [2024-07-15 13:56:31.414377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.414422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.414439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.414448] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.414457] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.424807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 00:26:04.969 [2024-07-15 13:56:31.434410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.434451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.434467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.434477] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.434486] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.444738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 
00:26:04.969 [2024-07-15 13:56:31.454443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.454481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.454498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.454507] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.454516] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.464708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 00:26:04.969 [2024-07-15 13:56:31.474568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.969 [2024-07-15 13:56:31.474608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.969 [2024-07-15 13:56:31.474624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.969 [2024-07-15 13:56:31.474634] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.969 [2024-07-15 13:56:31.474642] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:04.969 [2024-07-15 13:56:31.484961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.969 qpair failed and we were unable to recover it. 00:26:05.227 [2024-07-15 13:56:31.494745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.227 [2024-07-15 13:56:31.494799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.227 [2024-07-15 13:56:31.494818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.227 [2024-07-15 13:56:31.494828] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.227 [2024-07-15 13:56:31.494838] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.227 [2024-07-15 13:56:31.504979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.227 qpair failed and we were unable to recover it. 
00:26:05.227 [2024-07-15 13:56:31.514699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.227 [2024-07-15 13:56:31.514739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.514756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.514765] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.514774] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.525073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 00:26:05.228 [2024-07-15 13:56:31.534793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.534833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.534851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.534861] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.534870] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.544993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 00:26:05.228 [2024-07-15 13:56:31.554790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.554829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.554845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.554855] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.554864] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.565260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 
00:26:05.228 [2024-07-15 13:56:31.574760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.574799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.574819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.574829] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.574837] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.585180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 00:26:05.228 [2024-07-15 13:56:31.594969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.595011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.595028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.595038] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.595047] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.605223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 00:26:05.228 [2024-07-15 13:56:31.615008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.615043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.615060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.615070] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.615078] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.625258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 
00:26:05.228 [2024-07-15 13:56:31.635027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.635066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.635083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.635092] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.635101] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.645418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 00:26:05.228 [2024-07-15 13:56:31.655042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.655082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.655099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.655108] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.655120] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.665377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 00:26:05.228 [2024-07-15 13:56:31.675080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.675121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.675138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.675147] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.675156] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.685514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 
00:26:05.228 [2024-07-15 13:56:31.695155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.695191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.695209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.695218] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.695226] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.705656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 00:26:05.228 [2024-07-15 13:56:31.715150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.715190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.715206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.715216] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.715224] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.725671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 00:26:05.228 [2024-07-15 13:56:31.735278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.228 [2024-07-15 13:56:31.735325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.228 [2024-07-15 13:56:31.735342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.228 [2024-07-15 13:56:31.735351] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.228 [2024-07-15 13:56:31.735360] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.228 [2024-07-15 13:56:31.745517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.228 qpair failed and we were unable to recover it. 
00:26:05.489 [2024-07-15 13:56:31.755407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.489 [2024-07-15 13:56:31.755443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.489 [2024-07-15 13:56:31.755460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.489 [2024-07-15 13:56:31.755469] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.489 [2024-07-15 13:56:31.755478] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.489 [2024-07-15 13:56:31.765666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.489 qpair failed and we were unable to recover it. 00:26:05.489 [2024-07-15 13:56:31.775430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.489 [2024-07-15 13:56:31.775472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.489 [2024-07-15 13:56:31.775488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.489 [2024-07-15 13:56:31.775498] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.489 [2024-07-15 13:56:31.775507] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.489 [2024-07-15 13:56:31.785826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.489 qpair failed and we were unable to recover it. 00:26:05.489 [2024-07-15 13:56:31.795529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.489 [2024-07-15 13:56:31.795573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.489 [2024-07-15 13:56:31.795591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.489 [2024-07-15 13:56:31.795601] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.489 [2024-07-15 13:56:31.795610] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.489 [2024-07-15 13:56:31.805908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.489 qpair failed and we were unable to recover it. 
00:26:05.489 [2024-07-15 13:56:31.815484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.489 [2024-07-15 13:56:31.815524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.489 [2024-07-15 13:56:31.815542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.489 [2024-07-15 13:56:31.815551] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.489 [2024-07-15 13:56:31.815560] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.489 [2024-07-15 13:56:31.826074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.489 qpair failed and we were unable to recover it. 00:26:05.489 [2024-07-15 13:56:31.835545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.489 [2024-07-15 13:56:31.835586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.489 [2024-07-15 13:56:31.835603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.489 [2024-07-15 13:56:31.835617] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.489 [2024-07-15 13:56:31.835625] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.489 [2024-07-15 13:56:31.845930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.489 qpair failed and we were unable to recover it. 00:26:05.489 [2024-07-15 13:56:31.855649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.489 [2024-07-15 13:56:31.855686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.489 [2024-07-15 13:56:31.855704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.489 [2024-07-15 13:56:31.855713] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.489 [2024-07-15 13:56:31.855722] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.489 [2024-07-15 13:56:31.866020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.489 qpair failed and we were unable to recover it. 
00:26:05.489 [2024-07-15 13:56:31.875782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.489 [2024-07-15 13:56:31.875825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.489 [2024-07-15 13:56:31.875842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.489 [2024-07-15 13:56:31.875851] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.489 [2024-07-15 13:56:31.875860] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.489 [2024-07-15 13:56:31.886240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.489 qpair failed and we were unable to recover it. 00:26:05.489 [2024-07-15 13:56:31.895786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.489 [2024-07-15 13:56:31.895825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.489 [2024-07-15 13:56:31.895842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.489 [2024-07-15 13:56:31.895852] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.489 [2024-07-15 13:56:31.895861] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.489 [2024-07-15 13:56:31.906160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.489 qpair failed and we were unable to recover it. 00:26:05.489 [2024-07-15 13:56:31.915818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.490 [2024-07-15 13:56:31.915855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.490 [2024-07-15 13:56:31.915872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.490 [2024-07-15 13:56:31.915881] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.490 [2024-07-15 13:56:31.915890] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.490 [2024-07-15 13:56:31.926417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.490 qpair failed and we were unable to recover it. 
00:26:05.490 [2024-07-15 13:56:31.935867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.490 [2024-07-15 13:56:31.935902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.490 [2024-07-15 13:56:31.935919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.490 [2024-07-15 13:56:31.935928] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.490 [2024-07-15 13:56:31.935937] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.490 [2024-07-15 13:56:31.946290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.490 qpair failed and we were unable to recover it. 00:26:05.490 [2024-07-15 13:56:31.956033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.490 [2024-07-15 13:56:31.956072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.490 [2024-07-15 13:56:31.956089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.490 [2024-07-15 13:56:31.956099] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.490 [2024-07-15 13:56:31.956107] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.490 [2024-07-15 13:56:31.966363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.490 qpair failed and we were unable to recover it. 00:26:05.490 [2024-07-15 13:56:31.976079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.490 [2024-07-15 13:56:31.976115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.490 [2024-07-15 13:56:31.976132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.490 [2024-07-15 13:56:31.976141] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.490 [2024-07-15 13:56:31.976150] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.490 [2024-07-15 13:56:31.986453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.490 qpair failed and we were unable to recover it. 
00:26:05.490 [2024-07-15 13:56:31.996097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.490 [2024-07-15 13:56:31.996133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.490 [2024-07-15 13:56:31.996150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.490 [2024-07-15 13:56:31.996159] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.490 [2024-07-15 13:56:31.996169] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.490 [2024-07-15 13:56:32.006382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.490 qpair failed and we were unable to recover it. 00:26:05.787 [2024-07-15 13:56:32.016204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.787 [2024-07-15 13:56:32.016250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.787 [2024-07-15 13:56:32.016270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.787 [2024-07-15 13:56:32.016279] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.787 [2024-07-15 13:56:32.016288] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.787 [2024-07-15 13:56:32.026664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.787 qpair failed and we were unable to recover it. 00:26:05.787 [2024-07-15 13:56:32.036216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.036261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.036277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.036286] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.036295] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.046512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 
00:26:05.788 [2024-07-15 13:56:32.056192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.056230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.056247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.056257] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.056266] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.066525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 00:26:05.788 [2024-07-15 13:56:32.076424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.076461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.076478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.076487] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.076496] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.086742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 00:26:05.788 [2024-07-15 13:56:32.096289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.096322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.096339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.096349] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.096361] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.106760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 
00:26:05.788 [2024-07-15 13:56:32.116380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.116422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.116439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.116449] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.116457] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.126935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 00:26:05.788 [2024-07-15 13:56:32.136452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.136489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.136505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.136515] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.136524] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.146914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 00:26:05.788 [2024-07-15 13:56:32.156557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.156594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.156611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.156620] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.156629] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.166978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 
00:26:05.788 [2024-07-15 13:56:32.176632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.176675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.176692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.176701] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.176710] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.187046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 00:26:05.788 [2024-07-15 13:56:32.196696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.196737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.196755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.196764] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.196773] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.206983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 00:26:05.788 [2024-07-15 13:56:32.216783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.216826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.216843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.216852] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.216861] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.227274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 
00:26:05.788 [2024-07-15 13:56:32.236852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.236892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.236909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.236919] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.236927] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.247252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 00:26:05.788 [2024-07-15 13:56:32.256886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.256929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.256946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.256955] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.256964] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.788 [2024-07-15 13:56:32.267279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.788 qpair failed and we were unable to recover it. 00:26:05.788 [2024-07-15 13:56:32.277032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.788 [2024-07-15 13:56:32.277072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.788 [2024-07-15 13:56:32.277089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.788 [2024-07-15 13:56:32.277102] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.788 [2024-07-15 13:56:32.277111] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:05.789 [2024-07-15 13:56:32.287273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:05.789 qpair failed and we were unable to recover it. 
00:26:05.789 [2024-07-15 13:56:32.296972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:05.789 [2024-07-15 13:56:32.297016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:05.789 [2024-07-15 13:56:32.297033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:05.789 [2024-07-15 13:56:32.297042] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:05.789 [2024-07-15 13:56:32.297051] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.062 [2024-07-15 13:56:32.307392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.062 qpair failed and we were unable to recover it. 00:26:06.062 [2024-07-15 13:56:32.317010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.062 [2024-07-15 13:56:32.317055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.062 [2024-07-15 13:56:32.317071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.062 [2024-07-15 13:56:32.317081] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.062 [2024-07-15 13:56:32.317090] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.062 [2024-07-15 13:56:32.327441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.062 qpair failed and we were unable to recover it. 00:26:06.062 [2024-07-15 13:56:32.337104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.062 [2024-07-15 13:56:32.337142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.062 [2024-07-15 13:56:32.337159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.062 [2024-07-15 13:56:32.337168] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.062 [2024-07-15 13:56:32.337178] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.062 [2024-07-15 13:56:32.347518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.062 qpair failed and we were unable to recover it. 
00:26:06.062 [2024-07-15 13:56:32.357197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.062 [2024-07-15 13:56:32.357236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.062 [2024-07-15 13:56:32.357253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.062 [2024-07-15 13:56:32.357262] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.062 [2024-07-15 13:56:32.357271] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.062 [2024-07-15 13:56:32.367620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.062 qpair failed and we were unable to recover it. 00:26:06.062 [2024-07-15 13:56:32.377270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.062 [2024-07-15 13:56:32.377313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.062 [2024-07-15 13:56:32.377330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.062 [2024-07-15 13:56:32.377339] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.062 [2024-07-15 13:56:32.377348] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.062 [2024-07-15 13:56:32.387737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.062 qpair failed and we were unable to recover it. 00:26:06.062 [2024-07-15 13:56:32.397207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.062 [2024-07-15 13:56:32.397247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.062 [2024-07-15 13:56:32.397265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.062 [2024-07-15 13:56:32.397275] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.062 [2024-07-15 13:56:32.397284] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.062 [2024-07-15 13:56:32.407693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.062 qpair failed and we were unable to recover it. 
00:26:06.062 [2024-07-15 13:56:32.417267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.062 [2024-07-15 13:56:32.417305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.062 [2024-07-15 13:56:32.417322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.062 [2024-07-15 13:56:32.417332] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.062 [2024-07-15 13:56:32.417341] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.062 [2024-07-15 13:56:32.427734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.062 qpair failed and we were unable to recover it. 00:26:06.062 [2024-07-15 13:56:32.437420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.062 [2024-07-15 13:56:32.437462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.062 [2024-07-15 13:56:32.437479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.062 [2024-07-15 13:56:32.437489] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.062 [2024-07-15 13:56:32.437498] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.062 [2024-07-15 13:56:32.447945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.062 qpair failed and we were unable to recover it. 00:26:06.062 [2024-07-15 13:56:32.457504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.062 [2024-07-15 13:56:32.457547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.062 [2024-07-15 13:56:32.457571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.062 [2024-07-15 13:56:32.457581] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.062 [2024-07-15 13:56:32.457590] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.062 [2024-07-15 13:56:32.467897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.062 qpair failed and we were unable to recover it. 
00:26:06.062 [2024-07-15 13:56:32.477510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.063 [2024-07-15 13:56:32.477553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.063 [2024-07-15 13:56:32.477575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.063 [2024-07-15 13:56:32.477586] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.063 [2024-07-15 13:56:32.477596] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.063 [2024-07-15 13:56:32.487867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.063 qpair failed and we were unable to recover it. 00:26:06.063 [2024-07-15 13:56:32.497526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.063 [2024-07-15 13:56:32.497561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.063 [2024-07-15 13:56:32.497592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.063 [2024-07-15 13:56:32.497602] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.063 [2024-07-15 13:56:32.497610] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.063 [2024-07-15 13:56:32.507898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.063 qpair failed and we were unable to recover it. 00:26:06.063 [2024-07-15 13:56:32.517604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.063 [2024-07-15 13:56:32.517643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.063 [2024-07-15 13:56:32.517660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.063 [2024-07-15 13:56:32.517670] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.063 [2024-07-15 13:56:32.517678] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.063 [2024-07-15 13:56:32.527910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.063 qpair failed and we were unable to recover it. 
00:26:06.063 [2024-07-15 13:56:32.537666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.063 [2024-07-15 13:56:32.537706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.063 [2024-07-15 13:56:32.537723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.063 [2024-07-15 13:56:32.537733] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.063 [2024-07-15 13:56:32.537745] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.063 [2024-07-15 13:56:32.547964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.063 qpair failed and we were unable to recover it. 00:26:06.063 [2024-07-15 13:56:32.557809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.063 [2024-07-15 13:56:32.557849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.063 [2024-07-15 13:56:32.557865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.063 [2024-07-15 13:56:32.557875] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.063 [2024-07-15 13:56:32.557883] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.063 [2024-07-15 13:56:32.567962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.063 qpair failed and we were unable to recover it. 00:26:06.063 [2024-07-15 13:56:32.577848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.063 [2024-07-15 13:56:32.577885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.063 [2024-07-15 13:56:32.577902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.063 [2024-07-15 13:56:32.577912] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.063 [2024-07-15 13:56:32.577921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.321 [2024-07-15 13:56:32.588191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.321 qpair failed and we were unable to recover it. 
00:26:06.321 [2024-07-15 13:56:32.597887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.321 [2024-07-15 13:56:32.597931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.321 [2024-07-15 13:56:32.597948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.321 [2024-07-15 13:56:32.597957] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.321 [2024-07-15 13:56:32.597966] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.321 [2024-07-15 13:56:32.608344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.321 qpair failed and we were unable to recover it. 00:26:06.321 [2024-07-15 13:56:32.617915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.321 [2024-07-15 13:56:32.617960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.321 [2024-07-15 13:56:32.617977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.321 [2024-07-15 13:56:32.617986] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.321 [2024-07-15 13:56:32.617995] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.321 [2024-07-15 13:56:32.628355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.321 qpair failed and we were unable to recover it. 00:26:06.321 [2024-07-15 13:56:32.638057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.321 [2024-07-15 13:56:32.638101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.321 [2024-07-15 13:56:32.638118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.321 [2024-07-15 13:56:32.638127] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.321 [2024-07-15 13:56:32.638136] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.321 [2024-07-15 13:56:32.648316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.321 qpair failed and we were unable to recover it. 
00:26:06.321 [2024-07-15 13:56:32.658077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.321 [2024-07-15 13:56:32.658109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.321 [2024-07-15 13:56:32.658126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.321 [2024-07-15 13:56:32.658136] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.321 [2024-07-15 13:56:32.658144] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.321 [2024-07-15 13:56:32.668538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.321 qpair failed and we were unable to recover it. 00:26:06.321 [2024-07-15 13:56:32.678056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.321 [2024-07-15 13:56:32.678097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.321 [2024-07-15 13:56:32.678114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.321 [2024-07-15 13:56:32.678124] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.321 [2024-07-15 13:56:32.678133] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.321 [2024-07-15 13:56:32.688284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.321 qpair failed and we were unable to recover it. 00:26:06.321 [2024-07-15 13:56:32.698182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.321 [2024-07-15 13:56:32.698225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.321 [2024-07-15 13:56:32.698242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.321 [2024-07-15 13:56:32.698251] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.321 [2024-07-15 13:56:32.698260] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.321 [2024-07-15 13:56:32.708647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.321 qpair failed and we were unable to recover it. 
00:26:06.321 [2024-07-15 13:56:32.718192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.321 [2024-07-15 13:56:32.718235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.321 [2024-07-15 13:56:32.718251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.321 [2024-07-15 13:56:32.718265] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.321 [2024-07-15 13:56:32.718274] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.321 [2024-07-15 13:56:32.728792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.321 qpair failed and we were unable to recover it. 00:26:06.321 [2024-07-15 13:56:32.738326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.321 [2024-07-15 13:56:32.738363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.321 [2024-07-15 13:56:32.738380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.321 [2024-07-15 13:56:32.738390] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.322 [2024-07-15 13:56:32.738399] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.322 [2024-07-15 13:56:32.748871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.322 qpair failed and we were unable to recover it. 00:26:06.322 [2024-07-15 13:56:32.758345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.322 [2024-07-15 13:56:32.758385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.322 [2024-07-15 13:56:32.758402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.322 [2024-07-15 13:56:32.758412] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.322 [2024-07-15 13:56:32.758421] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.322 [2024-07-15 13:56:32.768635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.322 qpair failed and we were unable to recover it. 
00:26:06.322 [2024-07-15 13:56:32.778356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.322 [2024-07-15 13:56:32.778397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.322 [2024-07-15 13:56:32.778414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.322 [2024-07-15 13:56:32.778424] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.322 [2024-07-15 13:56:32.778433] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.322 [2024-07-15 13:56:32.788842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.322 qpair failed and we were unable to recover it. 00:26:06.322 [2024-07-15 13:56:32.798570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.322 [2024-07-15 13:56:32.798610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.322 [2024-07-15 13:56:32.798628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.322 [2024-07-15 13:56:32.798638] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.322 [2024-07-15 13:56:32.798647] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.322 [2024-07-15 13:56:32.808877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.322 qpair failed and we were unable to recover it. 00:26:06.322 [2024-07-15 13:56:32.818539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.322 [2024-07-15 13:56:32.818585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.322 [2024-07-15 13:56:32.818602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.322 [2024-07-15 13:56:32.818612] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.322 [2024-07-15 13:56:32.818621] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.322 [2024-07-15 13:56:32.828813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.322 qpair failed and we were unable to recover it. 
00:26:06.322 [2024-07-15 13:56:32.838608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.322 [2024-07-15 13:56:32.838649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.322 [2024-07-15 13:56:32.838666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.322 [2024-07-15 13:56:32.838676] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.322 [2024-07-15 13:56:32.838685] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:32.848847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 00:26:06.580 [2024-07-15 13:56:32.858742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:32.858781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:32.858798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:32.858807] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:32.858816] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:32.869060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 00:26:06.580 [2024-07-15 13:56:32.878776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:32.878818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:32.878835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:32.878844] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:32.878853] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:32.888970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 
00:26:06.580 [2024-07-15 13:56:32.898852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:32.898888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:32.898909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:32.898919] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:32.898927] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:32.909139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 00:26:06.580 [2024-07-15 13:56:32.918760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:32.918798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:32.918816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:32.918826] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:32.918835] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:32.929297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 00:26:06.580 [2024-07-15 13:56:32.938941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:32.938986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:32.939003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:32.939013] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:32.939022] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:32.949145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 
00:26:06.580 [2024-07-15 13:56:32.959069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:32.959107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:32.959125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:32.959134] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:32.959143] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:32.969308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 00:26:06.580 [2024-07-15 13:56:32.979073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:32.979113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:32.979130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:32.979140] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:32.979152] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:32.989322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 00:26:06.580 [2024-07-15 13:56:32.999127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:32.999168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:32.999185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:32.999195] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:32.999204] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:33.009440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 
00:26:06.580 [2024-07-15 13:56:33.019194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:33.019232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:33.019249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:33.019258] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:33.019267] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:33.029390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 00:26:06.580 [2024-07-15 13:56:33.039298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:33.039335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:33.039352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:33.039362] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:33.039371] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:33.049601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.580 qpair failed and we were unable to recover it. 00:26:06.580 [2024-07-15 13:56:33.059263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.580 [2024-07-15 13:56:33.059299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.580 [2024-07-15 13:56:33.059316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.580 [2024-07-15 13:56:33.059325] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.580 [2024-07-15 13:56:33.059334] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.580 [2024-07-15 13:56:33.069546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.581 qpair failed and we were unable to recover it. 
00:26:06.581 [2024-07-15 13:56:33.079334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.581 [2024-07-15 13:56:33.079374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.581 [2024-07-15 13:56:33.079391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.581 [2024-07-15 13:56:33.079400] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.581 [2024-07-15 13:56:33.079409] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.581 [2024-07-15 13:56:33.089583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.581 qpair failed and we were unable to recover it. 00:26:06.581 [2024-07-15 13:56:33.099370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.581 [2024-07-15 13:56:33.099408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.581 [2024-07-15 13:56:33.099425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.581 [2024-07-15 13:56:33.099435] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.581 [2024-07-15 13:56:33.099444] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.838 [2024-07-15 13:56:33.109716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.838 qpair failed and we were unable to recover it. 00:26:06.838 [2024-07-15 13:56:33.119478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.838 [2024-07-15 13:56:33.119513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.838 [2024-07-15 13:56:33.119529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.838 [2024-07-15 13:56:33.119539] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.838 [2024-07-15 13:56:33.119547] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.838 [2024-07-15 13:56:33.129646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.838 qpair failed and we were unable to recover it. 
00:26:06.838 [2024-07-15 13:56:33.139479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.839 [2024-07-15 13:56:33.139521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.839 [2024-07-15 13:56:33.139538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.839 [2024-07-15 13:56:33.139548] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.839 [2024-07-15 13:56:33.139556] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.839 [2024-07-15 13:56:33.149896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.839 qpair failed and we were unable to recover it. 00:26:06.839 [2024-07-15 13:56:33.159468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:06.839 [2024-07-15 13:56:33.159507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:06.839 [2024-07-15 13:56:33.159527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:06.839 [2024-07-15 13:56:33.159537] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:06.839 [2024-07-15 13:56:33.159546] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:06.839 [2024-07-15 13:56:33.169714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.839 qpair failed and we were unable to recover it. 
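(Editor's note, not part of the job output.) The block above repeats one failure signature for each attempted I/O qpair connect: the target side logs "Unknown controller ID 0x1" from _nvmf_ctrlr_add_io_qpair, the host's fabric CONNECT poll then reports sct 1, sc 130, and the qpair is dropped with "CQ transport error -6" before the closing "qpair failed and we were unable to recover it" line. A minimal sketch for tallying those signatures from a saved copy of this console output is below; the file name console.log is an assumption for illustration, not something this job produces.

```python
# Minimal sketch: tally the repeated connect-failure signatures seen in the
# console output above. "console.log" is an assumed file name -- point it at
# a saved copy of this job's output.
import re
from collections import Counter

patterns = {
    "unknown_ctrlr_id": re.compile(r"Unknown controller ID 0x1"),
    "connect_sct1_sc130": re.compile(r"Connect command completed with error: sct 1, sc 130"),
    "cq_transport_err": re.compile(r"CQ transport error -6 .* on qpair id (\d+)"),
    "unrecovered_qpair": re.compile(r"qpair failed and we were unable to recover it"),
}

counts = Counter()
qpair_ids = Counter()
with open("console.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        for name, pat in patterns.items():
            m = pat.search(line)
            if m:
                counts[name] += 1
                if name == "cq_transport_err":
                    # group(1) is the qpair id printed by spdk_nvme_qpair_process_completions
                    qpair_ids[m.group(1)] += 1

for name, n in counts.most_common():
    print(f"{name}: {n}")
print("CQ transport errors by qpair id:", dict(qpair_ids))
```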
00:26:07.769 Write completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Write completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Write completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Write completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Write completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Write completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Write completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Write completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.769 Read completed with error (sct=0, sc=8) 00:26:07.769 starting I/O failed 00:26:07.770 Read completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Write completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Read completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Write completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Read completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Read completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Read completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Write completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Write completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Write completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Read completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Write completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 Write completed with error (sct=0, sc=8) 00:26:07.770 starting I/O failed 00:26:07.770 [2024-07-15 13:56:34.174713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:07.770 [2024-07-15 13:56:34.182255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.770 [2024-07-15 13:56:34.182302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.770 [2024-07-15 13:56:34.182322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.770 [2024-07-15 13:56:34.182332] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:26:07.770 [2024-07-15 13:56:34.182342] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:26:07.770 [2024-07-15 13:56:34.192903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:07.770 qpair failed and we were unable to recover it. 00:26:07.770 [2024-07-15 13:56:34.202504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.770 [2024-07-15 13:56:34.202543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.770 [2024-07-15 13:56:34.202560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.770 [2024-07-15 13:56:34.202578] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.770 [2024-07-15 13:56:34.202587] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:26:07.770 [2024-07-15 13:56:34.212924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:07.770 qpair failed and we were unable to recover it. 00:26:07.770 [2024-07-15 13:56:34.223011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.770 [2024-07-15 13:56:34.223073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.770 [2024-07-15 13:56:34.223148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.770 [2024-07-15 13:56:34.223194] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.770 [2024-07-15 13:56:34.223237] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:07.770 [2024-07-15 13:56:34.233204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.770 qpair failed and we were unable to recover it. 00:26:07.770 [2024-07-15 13:56:34.242851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.770 [2024-07-15 13:56:34.242905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.770 [2024-07-15 13:56:34.242944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.770 [2024-07-15 13:56:34.242970] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.770 [2024-07-15 13:56:34.242996] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:07.770 [2024-07-15 13:56:34.253138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.770 qpair failed and we were unable to recover it. 
00:26:07.770 [2024-07-15 13:56:34.253280] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:07.770 A controller has encountered a failure and is being reset. 00:26:07.770 [2024-07-15 13:56:34.263169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.770 [2024-07-15 13:56:34.263234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.770 [2024-07-15 13:56:34.263296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.770 [2024-07-15 13:56:34.263331] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.770 [2024-07-15 13:56:34.263363] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:07.770 [2024-07-15 13:56:34.273292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:07.770 qpair failed and we were unable to recover it. 00:26:07.770 [2024-07-15 13:56:34.282889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:07.770 [2024-07-15 13:56:34.282940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:07.770 [2024-07-15 13:56:34.282972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:07.770 [2024-07-15 13:56:34.282991] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:07.770 [2024-07-15 13:56:34.283008] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:07.770 [2024-07-15 13:56:34.293321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:07.770 qpair failed and we were unable to recover it. 00:26:07.770 [2024-07-15 13:56:34.293448] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:08.028 [2024-07-15 13:56:34.324038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:08.028 Controller properly reset. 00:26:08.028 Initializing NVMe Controllers 00:26:08.028 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:08.028 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:08.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:08.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:08.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:08.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:08.028 Initialization complete. Launching workers. 
00:26:08.028 Starting thread on core 1 00:26:08.028 Starting thread on core 2 00:26:08.028 Starting thread on core 3 00:26:08.028 Starting thread on core 0 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:08.028 00:26:08.028 real 0m12.636s 00:26:08.028 user 0m27.183s 00:26:08.028 sys 0m3.221s 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:08.028 ************************************ 00:26:08.028 END TEST nvmf_target_disconnect_tc2 00:26:08.028 ************************************ 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:08.028 ************************************ 00:26:08.028 START TEST nvmf_target_disconnect_tc3 00:26:08.028 ************************************ 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc3 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=2604033 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:26:08.028 13:56:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:26:08.028 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.554 13:56:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 2603028 00:26:10.554 13:56:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:26:11.489 Read completed with error (sct=0, sc=8) 00:26:11.489 starting I/O failed 00:26:11.489 Read completed with error (sct=0, sc=8) 00:26:11.489 starting I/O failed 00:26:11.489 Write completed with error (sct=0, sc=8) 00:26:11.489 starting I/O failed 00:26:11.489 Write completed with error (sct=0, sc=8) 00:26:11.489 starting I/O failed 00:26:11.489 Read completed with error (sct=0, sc=8) 00:26:11.489 starting I/O failed 00:26:11.489 Write completed with error (sct=0, sc=8) 00:26:11.489 starting I/O failed 00:26:11.489 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Write completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 
00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Write completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Write completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Write completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Write completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Write completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Write completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Write completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Write completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Write completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 Read completed with error (sct=0, sc=8) 00:26:11.490 starting I/O failed 00:26:11.490 [2024-07-15 13:56:37.680063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:12.056 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 2603028 Killed "${NVMF_APP[@]}" "$@" 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2604580 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2604580 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2604580 ']' 00:26:12.056 13:56:38 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.056 13:56:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:12.056 [2024-07-15 13:56:38.547343] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:12.056 [2024-07-15 13:56:38.547399] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.056 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.315 [2024-07-15 13:56:38.631357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 
starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Write completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 Read completed with error (sct=0, sc=8) 00:26:12.315 starting I/O failed 00:26:12.315 [2024-07-15 13:56:38.685109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:12.315 [2024-07-15 13:56:38.714549] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.315 [2024-07-15 13:56:38.714596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.315 [2024-07-15 13:56:38.714605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.315 [2024-07-15 13:56:38.714614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.315 [2024-07-15 13:56:38.714621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.315 [2024-07-15 13:56:38.714741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:12.315 [2024-07-15 13:56:38.714839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:12.315 [2024-07-15 13:56:38.714940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:12.315 [2024-07-15 13:56:38.714942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:12.880 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:12.880 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@862 -- # return 0 00:26:12.880 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:12.880 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:12.880 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:12.880 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.880 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:12.880 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.880 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.138 Malloc0 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:13.138 13:56:39 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.138 [2024-07-15 13:56:39.452988] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xde6200/0xdf1f80) succeed. 00:26:13.138 [2024-07-15 13:56:39.462846] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xde7840/0xe33610) succeed. 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.138 [2024-07-15 13:56:39.615454] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.138 13:56:39 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 2604033 00:26:13.395 Write completed with error (sct=0, sc=8) 00:26:13.395 starting I/O failed 00:26:13.395 Read completed with error (sct=0, sc=8) 00:26:13.395 starting I/O failed 00:26:13.395 Write completed with error (sct=0, sc=8) 00:26:13.395 starting I/O failed 
00:26:13.395 Read completed with error (sct=0, sc=8) 00:26:13.395 starting I/O failed 00:26:13.395 Write completed with error (sct=0, sc=8) 00:26:13.395 starting I/O failed 00:26:13.395 Write completed with error (sct=0, sc=8) 00:26:13.395 starting I/O failed 00:26:13.395 Read completed with error (sct=0, sc=8) 00:26:13.395 starting I/O failed 00:26:13.395 Read completed with error (sct=0, sc=8) 00:26:13.395 starting I/O failed 00:26:13.395 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Read completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 Write completed with error (sct=0, sc=8) 00:26:13.396 starting I/O failed 00:26:13.396 [2024-07-15 13:56:39.690316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.396 [2024-07-15 13:56:39.691916] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:13.396 [2024-07-15 13:56:39.691935] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:13.396 [2024-07-15 13:56:39.691944] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:14.327 [2024-07-15 13:56:40.695896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.327 qpair failed and we were unable to recover it. 
00:26:14.327 [2024-07-15 13:56:40.697399] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:14.327 [2024-07-15 13:56:40.697419] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:14.327 [2024-07-15 13:56:40.697428] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:15.258 [2024-07-15 13:56:41.701301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.258 qpair failed and we were unable to recover it. 00:26:15.258 [2024-07-15 13:56:41.702733] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:15.258 [2024-07-15 13:56:41.702750] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:15.258 [2024-07-15 13:56:41.702758] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:16.186 [2024-07-15 13:56:42.706600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:16.186 qpair failed and we were unable to recover it. 00:26:16.186 [2024-07-15 13:56:42.708024] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:16.186 [2024-07-15 13:56:42.708041] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:16.186 [2024-07-15 13:56:42.708049] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:17.558 [2024-07-15 13:56:43.711829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:17.558 qpair failed and we were unable to recover it. 00:26:17.558 [2024-07-15 13:56:43.713211] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:17.558 [2024-07-15 13:56:43.713228] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:17.558 [2024-07-15 13:56:43.713237] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:18.488 [2024-07-15 13:56:44.717021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:18.488 qpair failed and we were unable to recover it. 00:26:18.488 [2024-07-15 13:56:44.718371] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:18.488 [2024-07-15 13:56:44.718389] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:18.488 [2024-07-15 13:56:44.718397] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:19.417 [2024-07-15 13:56:45.722201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:19.417 qpair failed and we were unable to recover it. 
00:26:19.417 [2024-07-15 13:56:45.723693] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:19.417 [2024-07-15 13:56:45.723710] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:19.417 [2024-07-15 13:56:45.723718] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:20.347 [2024-07-15 13:56:46.727483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.347 qpair failed and we were unable to recover it. 00:26:20.347 [2024-07-15 13:56:46.729515] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:20.347 [2024-07-15 13:56:46.729594] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:20.347 [2024-07-15 13:56:46.729637] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:21.279 [2024-07-15 13:56:47.733431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:21.279 qpair failed and we were unable to recover it. 00:26:21.279 [2024-07-15 13:56:47.734907] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:21.279 [2024-07-15 13:56:47.734927] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:21.279 [2024-07-15 13:56:47.734939] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:22.650 [2024-07-15 13:56:48.738795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:22.650 qpair failed and we were unable to recover it. 00:26:22.650 [2024-07-15 13:56:48.740645] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:22.650 [2024-07-15 13:56:48.740707] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:22.650 [2024-07-15 13:56:48.740736] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:23.581 [2024-07-15 13:56:49.744544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:23.581 qpair failed and we were unable to recover it. 00:26:23.581 [2024-07-15 13:56:49.745906] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:23.581 [2024-07-15 13:56:49.745923] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:23.581 [2024-07-15 13:56:49.745935] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:24.512 [2024-07-15 13:56:50.749806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:24.512 qpair failed and we were unable to recover it. 
00:26:24.512 [2024-07-15 13:56:50.749934] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:24.512 A controller has encountered a failure and is being reset. 00:26:24.512 Resorting to new failover address 192.168.100.9 00:26:24.512 [2024-07-15 13:56:50.750038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.512 [2024-07-15 13:56:50.750109] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:24.512 [2024-07-15 13:56:50.751987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:24.512 Controller properly reset. 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Read completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 Write completed with 
error (sct=0, sc=8) 00:26:25.444 starting I/O failed 00:26:25.444 [2024-07-15 13:56:51.797726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:25.444 Initializing NVMe Controllers 00:26:25.444 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:25.444 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:25.444 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:25.444 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:25.444 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:25.444 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:25.444 Initialization complete. Launching workers. 00:26:25.444 Starting thread on core 1 00:26:25.444 Starting thread on core 2 00:26:25.444 Starting thread on core 3 00:26:25.444 Starting thread on core 0 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:26:25.444 00:26:25.444 real 0m17.367s 00:26:25.444 user 0m59.762s 00:26:25.444 sys 0m5.540s 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:25.444 ************************************ 00:26:25.444 END TEST nvmf_target_disconnect_tc3 00:26:25.444 ************************************ 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:25.444 rmmod nvme_rdma 00:26:25.444 rmmod nvme_fabrics 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2604580 ']' 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2604580 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2604580 ']' 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2604580 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- 
common/autotest_common.sh@953 -- # uname 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:25.444 13:56:51 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2604580 00:26:25.702 13:56:51 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:26:25.702 13:56:51 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:26:25.702 13:56:51 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2604580' 00:26:25.702 killing process with pid 2604580 00:26:25.702 13:56:51 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2604580 00:26:25.702 13:56:51 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2604580 00:26:25.961 13:56:52 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:25.961 13:56:52 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:25.961 00:26:25.961 real 0m38.780s 00:26:25.961 user 2m23.619s 00:26:25.961 sys 0m14.749s 00:26:25.961 13:56:52 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:25.961 13:56:52 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:25.961 ************************************ 00:26:25.961 END TEST nvmf_target_disconnect 00:26:25.961 ************************************ 00:26:25.961 13:56:52 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:26:25.961 13:56:52 nvmf_rdma -- nvmf/nvmf.sh@126 -- # timing_exit host 00:26:25.961 13:56:52 nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:25.961 13:56:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:25.961 13:56:52 nvmf_rdma -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:26:25.961 00:26:25.961 real 18m53.208s 00:26:25.961 user 44m5.921s 00:26:25.961 sys 5m30.619s 00:26:25.961 13:56:52 nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:25.961 13:56:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:25.961 ************************************ 00:26:25.961 END TEST nvmf_rdma 00:26:25.961 ************************************ 00:26:25.961 13:56:52 -- common/autotest_common.sh@1142 -- # return 0 00:26:25.961 13:56:52 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:26:25.961 13:56:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:25.961 13:56:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.961 13:56:52 -- common/autotest_common.sh@10 -- # set +x 00:26:26.220 ************************************ 00:26:26.220 START TEST spdkcli_nvmf_rdma 00:26:26.220 ************************************ 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:26:26.220 * Looking for test storage... 
00:26:26.220 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809f3706-e051-e711-906e-0017a4403562 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=809f3706-e051-e711-906e-0017a4403562 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.220 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2606508 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 2606508 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@829 -- # '[' -z 2606508 ']' 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.221 13:56:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:26.221 [2024-07-15 13:56:52.692572] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 24.03.0 initialization... 00:26:26.221 [2024-07-15 13:56:52.692636] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606508 ] 00:26:26.221 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.479 [2024-07-15 13:56:52.774652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:26.479 [2024-07-15 13:56:52.855935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.480 [2024-07-15 13:56:52.855935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@862 -- # return 0 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:26:27.044 13:56:53 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:26:33.658 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:26:33.658 Found 0000:18:00.1 
(0x15b3 - 0x1015) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.658 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:26:33.659 Found net devices under 0000:18:00.0: mlx_0_0 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:26:33.659 Found net devices under 0000:18:00.1: mlx_0_1 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:33.659 13:57:00 
spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:33.659 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:33.659 link/ether 50:6b:4b:4b:c9:ae brd ff:ff:ff:ff:ff:ff 00:26:33.659 altname enp24s0f0np0 00:26:33.659 altname ens785f0np0 00:26:33.659 inet 192.168.100.8/24 scope global mlx_0_0 00:26:33.659 valid_lft forever preferred_lft forever 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:33.659 13:57:00 
spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:33.659 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:33.659 link/ether 50:6b:4b:4b:c9:af brd ff:ff:ff:ff:ff:ff 00:26:33.659 altname enp24s0f1np1 00:26:33.659 altname ens785f1np1 00:26:33.659 inet 192.168.100.9/24 scope global mlx_0_1 00:26:33.659 valid_lft forever preferred_lft forever 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:33.659 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- 
nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:33.919 192.168.100.9' 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:33.919 192.168.100.9' 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:33.919 192.168.100.9' 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:33.919 13:57:00 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:33.919 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:33.919 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:33.919 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:33.919 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:33.919 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:33.919 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:33.919 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:26:33.919 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 
192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:26:33.919 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:33.919 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:33.919 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:33.919 ' 00:26:36.454 [2024-07-15 13:57:02.880559] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb20f10/0x9a7a80) succeed. 00:26:36.454 [2024-07-15 13:57:02.890558] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb224b0/0xa92b00) succeed. 
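For reference, the two target addresses used for the rest of this run (192.168.100.8 and 192.168.100.9) come from the interface probing traced above: each RDMA-capable netdev is queried with "ip -o -4 addr show" and the fourth column is stripped of its prefix length. The snippet below is a minimal standalone sketch of that pattern, not the actual helpers in nvmf/common.sh; the interface list and variable names are illustrative assumptions.

    #!/usr/bin/env bash
    # Sketch: derive target IPs from RDMA-capable netdevs, mirroring the
    # "ip -o -4 addr show | awk | cut" pattern seen in the trace above.
    # Interface names and variable names are illustrative assumptions.
    rdma_ifs=(mlx_0_0 mlx_0_1)
    ips=()
    for ifc in "${rdma_ifs[@]}"; do
        # Column 4 of "ip -o -4 addr show" is ADDR/PREFIX; keep only ADDR.
        addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)
        [ -n "$addr" ] && ips+=("$addr")
    done
    NVMF_FIRST_TARGET_IP=${ips[0]}     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=${ips[1]}    # 192.168.100.9 in this run
    echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"

In the log the same extraction also populates RDMA_IP_LIST, from which the first and second target addresses are taken with head and tail.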
00:26:37.831 [2024-07-15 13:57:04.242673] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:26:40.365 [2024-07-15 13:57:06.630201] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:26:42.270 [2024-07-15 13:57:08.689058] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:26:44.172 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:44.172 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:44.172 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:44.172 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:44.172 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:44.172 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:44.172 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:44.172 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:26:44.172 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:26:44.172 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:26:44.172 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:44.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:44.172 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:44.172 13:57:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:44.172 13:57:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:44.172 13:57:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:44.172 13:57:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:44.172 13:57:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:44.172 13:57:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:44.172 13:57:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:26:44.173 13:57:10 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:44.431 13:57:10 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:44.431 13:57:10 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:44.431 13:57:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:44.431 13:57:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:44.431 13:57:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:44.431 13:57:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:44.431 13:57:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:44.431 13:57:10 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:44.431 13:57:10 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:44.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:44.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:44.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:44.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:26:44.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:26:44.431 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:44.431 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:44.431 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:44.431 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:44.431 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:44.431 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:44.431 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:44.431 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:44.431 ' 00:26:49.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:49.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:49.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:49.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:49.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:26:49.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:26:49.701 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:49.701 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:49.701 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:49.701 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:49.701 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:49.701 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:49.701 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:49.701 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 2606508 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@948 -- # '[' -z 2606508 ']' 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # kill -0 2606508 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # uname 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2606508 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2606508' 00:26:49.960 killing process with pid 2606508 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@967 -- # kill 2606508 00:26:49.960 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # wait 2606508 00:26:50.219 13:57:16 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:26:50.219 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
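The configuration that spdkcli_job.py created, checked against spdkcli_nvmf.test.match and then deleted above can also be reproduced by hand with the same spdkcli commands. The sketch below reuses a small subset of those commands verbatim; it assumes a target is already listening on /var/tmp/spdk.sock and that scripts/spdkcli.py executes its arguments as a single command line, as with the "ll /nvmf" call used by check_match.

    #!/usr/bin/env bash
    # Sketch of the create -> inspect -> clear cycle exercised above, using
    # spdkcli commands copied from the trace. The path assumes the workspace
    # layout shown in the log; adjust SPDKCLI for a different checkout.
    SPDKCLI=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py

    # Create a bdev, an RDMA transport and a subsystem with one namespace/listener.
    "$SPDKCLI" "/bdevs/malloc create 32 512 Malloc3"
    "$SPDKCLI" "nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
    "$SPDKCLI" "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
    "$SPDKCLI" "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1"
    "$SPDKCLI" "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4"

    # Inspect the tree the way check_match does before diffing the .match file.
    "$SPDKCLI" ll /nvmf

    # Tear the configuration back down in reverse order.
    "$SPDKCLI" "/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode1"
    "$SPDKCLI" "/bdevs/malloc delete Malloc3"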
00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:50.220 rmmod nvme_rdma 00:26:50.220 rmmod nvme_fabrics 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:50.220 00:26:50.220 real 0m24.216s 00:26:50.220 user 0m52.906s 00:26:50.220 sys 0m6.301s 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.220 13:57:16 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:50.220 ************************************ 00:26:50.220 END TEST spdkcli_nvmf_rdma 00:26:50.220 ************************************ 00:26:50.478 13:57:16 -- common/autotest_common.sh@1142 -- # return 0 00:26:50.478 13:57:16 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:26:50.478 13:57:16 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:26:50.478 13:57:16 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:26:50.478 13:57:16 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:26:50.478 13:57:16 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:26:50.478 13:57:16 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:26:50.478 13:57:16 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:26:50.478 13:57:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:50.478 13:57:16 -- common/autotest_common.sh@10 -- # set +x 00:26:50.478 13:57:16 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:26:50.478 13:57:16 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:26:50.478 13:57:16 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:26:50.478 13:57:16 -- common/autotest_common.sh@10 -- # set +x 00:26:55.754 INFO: APP EXITING 00:26:55.754 INFO: killing all VMs 00:26:55.754 INFO: killing vhost app 00:26:55.754 INFO: EXIT DONE 00:26:58.285 Waiting for block devices as requested 00:26:58.285 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:26:58.285 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:58.543 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:58.543 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:58.543 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:58.802 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:58.802 0000:00:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:26:58.802 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:59.060 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:59.060 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:59.060 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:59.319 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:59.319 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:59.319 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:59.579 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:59.579 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:59.579 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:02.970 Cleaning 00:27:02.970 Removing: /var/run/dpdk/spdk0/config 00:27:02.970 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:02.970 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:02.970 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:02.970 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:02.970 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:02.970 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:02.970 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:02.970 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:02.970 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:02.970 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:02.970 Removing: /var/run/dpdk/spdk1/config 00:27:02.970 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:02.970 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:02.970 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:02.970 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:02.970 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:02.970 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:02.970 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:02.970 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:02.970 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:02.970 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:02.970 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:02.970 Removing: /var/run/dpdk/spdk2/config 00:27:02.970 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:02.970 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:02.970 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:02.970 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:02.970 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:02.970 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:03.229 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:03.229 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:03.229 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:03.229 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:03.229 Removing: /var/run/dpdk/spdk3/config 00:27:03.229 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:03.229 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:03.229 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:03.229 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:03.229 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:03.229 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:03.229 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:03.229 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:03.229 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:03.229 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:03.229 Removing: 
/var/run/dpdk/spdk4/config 00:27:03.229 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:03.229 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:03.229 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:03.229 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:03.229 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:03.229 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:03.229 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:03.229 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:03.229 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:03.229 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:03.229 Removing: /dev/shm/bdevperf_trace.pid2441324 00:27:03.229 Removing: /dev/shm/bdevperf_trace.pid2534039 00:27:03.229 Removing: /dev/shm/bdev_svc_trace.1 00:27:03.229 Removing: /dev/shm/nvmf_trace.0 00:27:03.229 Removing: /dev/shm/spdk_tgt_trace.pid2343361 00:27:03.229 Removing: /var/run/dpdk/spdk0 00:27:03.229 Removing: /var/run/dpdk/spdk1 00:27:03.229 Removing: /var/run/dpdk/spdk2 00:27:03.229 Removing: /var/run/dpdk/spdk3 00:27:03.229 Removing: /var/run/dpdk/spdk4 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2340029 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2341572 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2343361 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2343901 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2344657 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2344857 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2345636 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2345794 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2345995 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2351415 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2353173 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2353408 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2353660 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2354056 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2354319 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2354520 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2354721 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2354957 00:27:03.229 Removing: /var/run/dpdk/spdk_pid2355560 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2358135 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2358369 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2358583 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2358761 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2359172 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2359350 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2359756 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2359937 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2360151 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2360335 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2360545 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2360575 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2361026 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2361225 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2361468 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2361694 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2361878 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2361952 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2362150 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2362361 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2362585 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2362842 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2363090 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2363333 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2363530 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2363738 
00:27:03.511 Removing: /var/run/dpdk/spdk_pid2363936 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2364141 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2364348 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2364548 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2364754 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2364953 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2365158 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2365366 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2365617 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2365874 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2366116 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2366341 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2366416 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2366735 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2370258 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2407310 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2410931 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2419404 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2423820 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2427285 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2428155 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2433947 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2441324 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2441690 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2445194 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2450251 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2452386 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2461110 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2482729 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2485846 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2531887 00:27:03.511 Removing: /var/run/dpdk/spdk_pid2533134 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2534039 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2537724 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2544000 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2544729 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2545450 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2546180 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2546535 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2550360 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2550366 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2554224 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2554744 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2555129 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2555673 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2555720 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2560459 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2560888 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2564610 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2566809 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2571594 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2580569 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2580630 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2596964 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2597209 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2602127 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2602539 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2604033 00:27:03.770 Removing: /var/run/dpdk/spdk_pid2606508 00:27:03.770 Clean 00:27:03.770 13:57:30 -- common/autotest_common.sh@1451 -- # return 0 00:27:03.770 13:57:30 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:03.770 13:57:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.770 13:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:03.770 13:57:30 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:03.770 13:57:30 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.770 13:57:30 -- common/autotest_common.sh@10 -- # set +x 00:27:04.030 13:57:30 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:27:04.030 13:57:30 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:27:04.030 13:57:30 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:27:04.030 13:57:30 -- spdk/autotest.sh@391 -- # hash lcov 00:27:04.030 13:57:30 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:04.030 13:57:30 -- spdk/autotest.sh@393 -- # hostname 00:27:04.030 13:57:30 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-43 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:27:04.030 geninfo: WARNING: invalid characters removed from testname! 00:27:25.994 13:57:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:26.252 13:57:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:28.156 13:57:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:30.071 13:57:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:31.449 13:57:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:33.352 13:57:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:27:34.731 13:58:01 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:34.731 13:58:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:34.731 13:58:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:34.731 13:58:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.731 13:58:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.731 13:58:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.731 13:58:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.731 13:58:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.731 13:58:01 -- paths/export.sh@5 -- $ export PATH 00:27:34.731 13:58:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.731 13:58:01 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:27:34.731 13:58:01 -- common/autobuild_common.sh@444 -- $ date +%s 00:27:34.731 13:58:01 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721044681.XXXXXX 00:27:34.991 13:58:01 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721044681.V4faXr 00:27:34.991 13:58:01 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:27:34.991 13:58:01 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:27:34.991 13:58:01 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:27:34.991 13:58:01 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:27:34.991 13:58:01 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme 
--exclude /tmp --status-bugs' 00:27:34.991 13:58:01 -- common/autobuild_common.sh@460 -- $ get_config_params 00:27:34.991 13:58:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:27:34.991 13:58:01 -- common/autotest_common.sh@10 -- $ set +x 00:27:34.991 13:58:01 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:27:34.991 13:58:01 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:27:34.991 13:58:01 -- pm/common@17 -- $ local monitor 00:27:34.991 13:58:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:34.991 13:58:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:34.991 13:58:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:34.991 13:58:01 -- pm/common@21 -- $ date +%s 00:27:34.991 13:58:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:34.991 13:58:01 -- pm/common@21 -- $ date +%s 00:27:34.991 13:58:01 -- pm/common@25 -- $ sleep 1 00:27:34.991 13:58:01 -- pm/common@21 -- $ date +%s 00:27:34.991 13:58:01 -- pm/common@21 -- $ date +%s 00:27:34.991 13:58:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721044681 00:27:34.991 13:58:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721044681 00:27:34.991 13:58:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721044681 00:27:34.991 13:58:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721044681 00:27:34.991 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721044681_collect-vmstat.pm.log 00:27:34.991 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721044681_collect-cpu-load.pm.log 00:27:34.991 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721044681_collect-cpu-temp.pm.log 00:27:34.991 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721044681_collect-bmc-pm.bmc.pm.log 00:27:35.927 13:58:02 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:27:35.927 13:58:02 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:27:35.927 13:58:02 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:27:35.927 13:58:02 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:35.927 13:58:02 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:35.927 13:58:02 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:35.927 13:58:02 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:35.927 13:58:02 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:35.928 13:58:02 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:27:35.928 13:58:02 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:35.928 13:58:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:27:35.928 13:58:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:27:35.928 13:58:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:27:35.928 13:58:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:35.928 13:58:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:27:35.928 13:58:02 -- pm/common@44 -- $ pid=2620280 00:27:35.928 13:58:02 -- pm/common@50 -- $ kill -TERM 2620280 00:27:35.928 13:58:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:35.928 13:58:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:27:35.928 13:58:02 -- pm/common@44 -- $ pid=2620282 00:27:35.928 13:58:02 -- pm/common@50 -- $ kill -TERM 2620282 00:27:35.928 13:58:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:35.928 13:58:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:27:35.928 13:58:02 -- pm/common@44 -- $ pid=2620284 00:27:35.928 13:58:02 -- pm/common@50 -- $ kill -TERM 2620284 00:27:35.928 13:58:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:35.928 13:58:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:27:35.928 13:58:02 -- pm/common@44 -- $ pid=2620305 00:27:35.928 13:58:02 -- pm/common@50 -- $ sudo -E kill -TERM 2620305 00:27:35.928 + [[ -n 2240600 ]] 00:27:35.928 + sudo kill 2240600 00:27:36.013 [Pipeline] } 00:27:36.030 [Pipeline] // stage 00:27:36.035 [Pipeline] } 00:27:36.053 [Pipeline] // timeout 00:27:36.058 [Pipeline] } 00:27:36.074 [Pipeline] // catchError 00:27:36.078 [Pipeline] } 00:27:36.095 [Pipeline] // wrap 00:27:36.100 [Pipeline] } 00:27:36.115 [Pipeline] // catchError 00:27:36.125 [Pipeline] stage 00:27:36.127 [Pipeline] { (Epilogue) 00:27:36.141 [Pipeline] catchError 00:27:36.143 [Pipeline] { 00:27:36.158 [Pipeline] echo 00:27:36.160 Cleanup processes 00:27:36.166 [Pipeline] sh 00:27:36.482 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:27:36.482 2620392 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:27:36.482 2620609 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:27:36.496 [Pipeline] sh 00:27:36.779 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:27:36.779 ++ grep -v 'sudo pgrep' 00:27:36.779 ++ awk '{print $1}' 00:27:36.779 + sudo kill -9 2620392 00:27:36.790 [Pipeline] sh 00:27:37.072 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:45.209 [Pipeline] sh 00:27:45.491 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:45.491 Artifacts sizes are good 00:27:45.504 [Pipeline] archiveArtifacts 00:27:45.510 Archiving artifacts 00:27:45.638 [Pipeline] sh 00:27:45.922 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:27:45.936 [Pipeline] cleanWs 00:27:45.945 [WS-CLEANUP] Deleting project workspace... 00:27:45.945 [WS-CLEANUP] Deferred wipeout is used... 
00:27:45.952 [WS-CLEANUP] done 00:27:45.953 [Pipeline] } 00:27:45.973 [Pipeline] // catchError 00:27:45.985 [Pipeline] sh 00:27:46.289 + logger -p user.info -t JENKINS-CI 00:27:46.297 [Pipeline] } 00:27:46.312 [Pipeline] // stage 00:27:46.316 [Pipeline] } 00:27:46.331 [Pipeline] // node 00:27:46.336 [Pipeline] End of Pipeline 00:27:46.367 Finished: SUCCESS
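As a footnote to the coverage step logged above (the lcov invocations run by autotest.sh after the tests), the capture/merge/filter sequence reduces to the pipeline sketched below. The paths and the reduced LCOV_OPTS string are simplifications of the workspace layout shown in the log, not the exact autotest variables.

    #!/usr/bin/env bash
    # Sketch of the lcov pipeline from the coverage step above: capture test
    # coverage, merge it with the pre-test baseline, then strip third-party
    # and helper-tool sources. SPDK_DIR/OUT stand in for the Jenkins paths.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    OUT=$SPDK_DIR/../output
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    # Capture coverage produced by the test run (cov_base.info is captured
    # the same way before the tests start).
    lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

    # Merge baseline and test data, then drop DPDK, system headers and tools.
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done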